Character Recognition Using Neural Networks

Character recognition is an area of pattern recognition in which a variety of research is ongoing. Much work on character recognition in several languages such as Tamil, Japanese, Arabic, Chinese, Hindi and Farsi has been done, but handwritten English character and word recognition using neural networks remains an open problem. In India, English as well as Hindi is used in offices.

Neural networks play an important role in pattern recognition. In this work, a neural network recognizes handwritten characters, using the back-propagation method to train and test on handwritten character samples.

In this work, a compass operator processes the gathered character samples. The compass operator brightens each edge by moving in the counterclockwise direction, forming a brightened edge for each character by changing its gradient values. The next stage is feature extraction through the Fourier descriptor. Different writing styles have different slants and sizes, which creates discrepancies; this problem is addressed by the Fourier descriptor, which forms a skeleton of a character. The style of the character lying inside the skeleton is then extracted. The experimental results of this thesis reveal that the Fourier descriptor feature extraction technique provides accuracy of about 96% and also requires less classification and training time.

Motivation

Much work on character recognition in several languages such as Tamil, Arabic, Chinese, Hindi and Farsi has been done, but recognizing characters of different languages remains a problem. Every country has its own language, but English is a global language. Therefore, developing a method for recognizing English characters with high accuracy and short training time has been taken up as a significant challenge.

Aim

The aims of this thesis are described as follows:

A compass operator is used to brighten each edge of the character image. The compass operator works for black-and-white as well as colored images. It increases character recognition accuracy and reduces training and classification time.

The Fourier descriptor forms a skeleton of the character image. Features are extracted from a closed boundary trace. Of the many features that can describe a closed boundary trace, the Fourier coefficients were chosen because they are invariant with respect to translation, rotation and size of similar characters.

Combining the compass operator with the Fourier descriptor enhances character recognition accuracy as well as reducing training and classification time.

Contribution:

Compass operator with Fourier descriptor

In this approach, a 30x30 black-and-white image in binary format is taken as input. Pixels covering the shape of the character have value 1 and the rest have value 0. The compass operator changes these pixel values, converting the character's pixels one by one to 1, 2, 3, ..., 8, while the rest are left unchanged. In other words, the compass operator changes the gradient values of each edge that has not yet been brightened. The output of the compass operator serves as input to the Fourier descriptor, which forms a boundary around the input character image. This combined approach is not limited to a single language but applies to any language on which the neural network is trained.

Scanned characters are then stored in separate arrays of size 30x30. Hence, by combining recognized individual characters, we can form a recognized word.

Chapter 2

NEURAL NETWORK

Overview

Neural networks are composed of simple elements operating in parallel. These elements are inspired by biological nervous systems. The connections between elements largely determine the network function. We can train a neural network to perform a particular function by adjusting the values of the connections (weights) between elements. Typically, neural networks are adjusted, or trained, so that a particular input leads to a specific target output. The network is adjusted, based on a comparison of the output and the target, until the network output matches the target. Typically, many such input/target pairs are needed to train a network. Neural networks have been trained to perform complex functions in various fields, including pattern recognition, identification, classification, speech, vision, and control systems. Neural networks can also be trained to solve problems that are difficult for conventional computers or human beings.

Figure 2.1 Basic concept of a neural network: the input passes through the network's connections (weights) between neurons to produce an output, and the weights are adjusted by comparing the output with the target.

A neural network has a parallel distributed architecture with a large number of nodes and connections. A node represents a neuron and an arrow represents the direction of signal flow. Each node is connected to others, and each connection is associated with a weight.

2.1 Artificial Neuron

Artificial neuron models are based on biological characteristics. An artificial neuron receives a set of inputs, and each input is multiplied by a weight, analogous to a synaptic strength. The sum of all weighted inputs determines the degree of activation, called the activation level. In a neural network, the connection weights are referred to as LTM (long-term memory) and the activations as STM (short-term memory). Each input Xi is modulated by a weight Wi, and the total input is expressed as

Figure 2.2 Artificial neuron

NET = Σ Xi Wi

where X = [X1, X2, ..., Xn] and W = [W1, W2, ..., Wn].
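As a concrete illustration, the weighted sum can be computed in a few lines (a NumPy sketch with made-up input and weight values; the thesis implementation itself is in MATLAB):

```python
import numpy as np

# NET = sum_i Xi * Wi : total input to a single artificial neuron
x = np.array([0.5, 1.0, -0.5])   # inputs X1..X3 (illustrative values)
w = np.array([0.2, 0.4, 0.1])    # weights W1..W3 (illustrative values)
net = float(np.dot(x, w))        # NET = X . W
print(net)
```

Here the dot product plays the role of the sum of synaptically weighted inputs.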

2.2 Activation function

The basic operation of an artificial neuron involves summing its weighted input signals and applying an output, or activation, function. Typically the same activation function is used for all neurons in any particular layer of a neural network, although this is not required. In most cases a nonlinear activation function is used; the advantage of a multilayer network over a single-layer network arises only with nonlinear functions.

2.3 Identity function:

F(x) = x for all x

Single-layer networks often use a step function to convert the net input into an output that is a binary (1 or 0) or bipolar (1 or -1) signal. The binary step function is also known as the threshold function or Heaviside function.

Figure 2.3 Identity function

2.4 Binary step function (with threshold θ)

F(x) = 1 if x ≥ θ
       0 if x < θ

Sigmoid functions (S-shaped curves) are useful activation functions. The logistic function and the hyperbolic tangent function are the most common. They are especially advantageous for neural networks trained by backpropagation, because the simple relationship between the value of the function at a point and the value of its derivative at that point reduces the computational burden during training.

2.5 Binary sigmoid function

The logistic function is a sigmoid function with range from 0 to 1. It is often used as the activation function for neural networks in which the desired output values lie in the interval between 0 and 1; in this form it is known as the binary sigmoid or logistic sigmoid.

Figure 2.5 Binary sigmoid function

2.6 Signum function

This is also known as the quantizer function; its output switches between -1 and +1 at the threshold θ. The function F is defined as

F(x) = +1 if x > θ
       -1 if x ≤ θ

Figure 2.6 Signum function: the output F(x) is +1 when the input x exceeds the threshold θ and -1 otherwise.

2.7 Hyperbolic tangent function

The function is given by

F(x) = tanh(x)

It can produce negative output values.
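The activation functions described above can be collected into a short sketch (NumPy for illustration; the thesis uses MATLAB, and the default threshold θ = 0 here is an assumption):

```python
import numpy as np

def identity(x):                     # Section 2.3: F(x) = x
    return x

def binary_step(x, theta=0.0):       # Section 2.4: 1 if x >= theta, else 0
    return 1.0 if x >= theta else 0.0

def binary_sigmoid(x):               # Section 2.5: logistic, range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def signum(x, theta=0.0):            # Section 2.6: +1 if x > theta, else -1
    return 1.0 if x > theta else -1.0

def tanh_act(x):                     # Section 2.7: range (-1, 1)
    return np.tanh(x)

print(binary_step(0.3), binary_sigmoid(0.0), signum(-0.2), tanh_act(0.0))
```

Only the sigmoid and tanh functions are differentiable everywhere, which is why they are the ones used with backpropagation.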

2.2 Neural Network Architecture

A neural network's architecture refers to its framework as well as its interconnection scheme. The framework is often specified by the number of layers and the number of nodes per layer. Inputs to the network are presented to the input layer. Input units do not process information; they merely distribute it to other units. Outputs are generated as signals of the output layer. An input vector applied to the input layer generates an output signal vector across the output layer. These signals may pass through one or more intermediate, or hidden, layers, which transform the signals depending upon the neuron signal function.

2.2.1 Single-layer neural network

This is the basic architecture of the simplest possible neural network that performs pattern classification. It consists of a layer of input units and a single output unit. In a single-layer neural network, a bias acts exactly like a weight on a connection from a unit whose activation is always 1; increasing the bias increases the net input to the unit. When a bias is included, the activation function is taken as

F(net) = 1 if net ≥ 0
         -1 if net < 0

where net = b + Σ xi wi

Now consider the separation of the input space into the region where the response of the network is positive and the region where it is negative. The boundary between the input values x1 and x2 for which the network gives a positive response and those for which the response is negative is the separating line

b + x1 w1 + x2 w2 = 0

that is,

x2 = (-w1/w2) x1 - (b/w2)

The requirement for a positive response from the output unit is that the net input it receives,

b + x1 w1 + x2 w2,

be greater than 0. During training, the values of w1, w2 and b are determined so that the network gives the correct response for the training data.
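As a worked example of this decision rule, with illustrative values w1 = w2 = 1 and b = -1 the boundary is the line x2 = -x1 + 1 (plain Python, not the thesis code):

```python
# Single-layer net with bias: the response is positive iff
# b + x1*w1 + x2*w2 >= 0, i.e. iff (x1, x2) lies on or above the line
# x2 = (-w1/w2)*x1 - (b/w2). The weights below are illustrative.
w1, w2, b = 1.0, 1.0, -1.0     # boundary: x2 = -x1 + 1

def response(x1, x2):
    net = b + x1 * w1 + x2 * w2
    return 1 if net >= 0 else -1

print(response(1.0, 1.0), response(0.0, 0.0))
```

The point (1, 1) lies above the line and gives a positive response; (0, 0) lies below it and gives a negative one.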

Figure 2.2.1 Single-layer neural network: input units connected to a single output unit, with bias b.

2.2.2 Algorithm for a single-layer neural network

Step 0. Initialize all weights:

wi = 0 (i = 1 to n)

Step 1. For each input training vector and target output pair, s : t, do Steps 2-4.

Step 2. Set activations for the input units:

xi = si (i = 1 to n)

Step 3. Set activation for the output unit:

y = t

Step 4. Adjust the weights:

wi(new) = wi(old) + xi y (i = 1 to n)

Adjust the bias:

b(new) = b(old) + y

The bias is adjusted exactly like a weight from a unit whose output is always 1.
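Steps 0-4 above can be sketched in NumPy; as a hypothetical example the net learns the logical AND function with bipolar inputs and targets (this is an illustrative sketch, not the thesis implementation):

```python
import numpy as np

# Hebb-rule training for a single-layer net (Steps 0-4 above):
# wi(new) = wi(old) + xi * y  and  b(new) = b(old) + y.
def train_hebb(pairs, n):
    w = np.zeros(n)                    # Step 0: initialise all weights
    b = 0.0
    for s, t in pairs:                 # Step 1: each (input, target) pair
        x = np.array(s, dtype=float)   # Step 2: input activations xi = si
        y = float(t)                   # Step 3: output activation y = t
        w += x * y                     # Step 4: adjust the weights
        b += y                         #         adjust the bias
    return w, b

# Hypothetical example: logical AND with bipolar inputs and targets.
pairs = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = train_hebb(pairs, n=2)
print(w, b)
```

With the bipolar step activation of Section 2.2.1, the trained weights classify all four AND patterns correctly.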

2.3 Multilayer Neural Network

The multilayer architecture comprises an input layer, an output layer, and also one or more intermediate layers called hidden layers. The hidden layer performs computation before sending the input on to the output layer. The input-layer neurons are connected to the hidden-layer neurons, and each connecting link carries a weight. This network is used in this thesis for training data.

Figure 2.3 Multilayer neural network: input layer, hidden layer and output layer.

Chapter 3

Backpropagation Neural Network

3.1 Introduction

A multilayer network is a network with one or more layers of nodes between the input units and the output units. There are weights between adjacent levels of units. Multilayer networks can solve more complicated problems than single-layer networks, but training is more difficult; when it succeeds, however, training a multilayer network can be more effective than training a single-layer one.

An effective general method of training a multilayer neural network is backpropagation. A backpropagation network can be used to solve problems in many areas. Training a network by backpropagation involves three stages:

the feedforward of the input training pattern,

the calculation and backpropagation of the associated error, and

the adjustment of the weights.

After training, applying the network involves only the computations of the feedforward phase. Even if training is slow, the trained network produces its output very rapidly. Numerous variations of backpropagation have been developed to improve the speed of the training process.

3.2 Training algorithm for backpropagation

Step 0. Initialize weights.

Step 1. While the stopping condition is false, do Steps 2-9.

Step 2. For each training pair, do Steps 3-8.

Feedforward:

Step 3. Each input unit (Xi, i = 1, ..., n) receives input signal xi and broadcasts this signal to all units in the hidden layer.

Step 4. Each hidden unit (Zj, j = 1, ..., p) sums its weighted input signals,

z_inj = v0j + Σ xi vij,

applies its activation function to compute its output signal,

zj = f(z_inj),

and sends this signal to all units in the layer above.

Step 5. Each output unit (Yk, k = 1, ..., m) sums its weighted input signals,

y_ink = w0k + Σ zj wjk,

and applies its activation function to compute its output signal,

yk = f(y_ink).

Backpropagation of error:

Step 6. Each output unit (Yk, k = 1, ..., m) receives a target pattern corresponding to the input training pattern and computes its error information term,

δk = (tk - yk) f′(y_ink),

calculates its weight correction term (with learning rate α),

Δwjk = α δk zj,

calculates its bias correction term,

Δw0k = α δk,

and sends δk to the units in the layer below.

Step 7. Each hidden unit (Zj, j = 1, ..., p) sums its delta inputs,

δ_inj = Σ δk wjk,

multiplies by the derivative of its activation function to calculate its error information term,

δj = δ_inj f′(z_inj),

calculates its weight correction term,

Δvij = α δj xi,

and calculates its bias correction term,

Δv0j = α δj.

Update weights and biases:

Step 8. Each output unit (Yk, k = 1, ..., m) updates its bias and weights (j = 0, ..., p):

wjk(new) = wjk(old) + Δwjk

Each hidden unit (Zj, j = 1, ..., p) updates its bias and weights (i = 0, ..., n):

vij(new) = vij(old) + Δvij

Step 9. Test the stopping condition.
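Steps 0-9 can be sketched in NumPy using the logistic activation, for which f′(y_in) = y(1 - y). This is an illustrative sketch rather than the thesis's MATLAB implementation; the network size, learning rate and the XOR sanity check are assumptions:

```python
import numpy as np

# Logistic activation; its derivative at output y is y * (1 - y).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer backpropagation (Steps 0-9): v, v0 are hidden-layer
# weights and biases, w, w0 output-layer ones; alpha is the learning rate.
def train_backprop(X, T, p, alpha=0.5, epochs=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape[1], T.shape[1]
    v = rng.uniform(-0.5, 0.5, (n, p)); v0 = np.zeros(p)    # Step 0
    w = rng.uniform(-0.5, 0.5, (p, m)); w0 = np.zeros(m)
    for _ in range(epochs):                                 # Step 1
        for x, t in zip(X, T):                              # Step 2
            z = sigmoid(v0 + x @ v)                         # Steps 3-4
            y = sigmoid(w0 + z @ w)                         # Step 5
            dk = (t - y) * y * (1 - y)                      # Step 6
            dj = (dk @ w.T) * z * (1 - z)                   # Step 7
            w += alpha * np.outer(z, dk); w0 += alpha * dk  # Step 8
            v += alpha * np.outer(x, dj); v0 += alpha * dj
    return v, v0, w, w0          # Step 9: here, a fixed number of epochs

# Hypothetical sanity check: learn XOR with 4 hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
v, v0, w, w0 = train_backprop(X, T, p=4)
pred = sigmoid(w0 + sigmoid(v0 + X @ v) @ w)
print(pred.ravel())
```

The stopping condition is simplified to a fixed epoch count; the thesis instead stops on a gradient/goal criterion (Section 5.1.5).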

3.3 Multilayer Neural Network Architecture in MATLAB

3.3.1 Neuron Model (logsig, tansig, purelin)

Each input is weighted with an appropriate weight w. The sum of the weighted inputs and the bias forms the input to the transfer function f. Neurons can use any differentiable transfer function f to generate their output.

Figure 3.3.1 Neuron model

3.3.2 Log-Sigmoid Transfer Function

Multilayer networks often use the log-sigmoid transfer function logsig.

Figure 3.3.2 Log-sigmoid transfer function

3.3.3 Tan-Sigmoid Transfer Function

The function logsig generates outputs between 0 and 1 as the neuron's net input goes from negative to positive infinity.

Figure 3.3.3 Tan-sigmoid transfer function

Multilayer networks can also use the tan-sigmoid transfer function tansig, whose outputs range between -1 and +1.

3.3.4 Linear Transfer Function

Sigmoid output neurons are often used for pattern recognition problems, while linear output neurons are used for function fitting problems. The linear transfer function purelin is shown below.

Figure 3.3.4 Purelin transfer function

Feedforward Network

A single-layer network of S logsig neurons with R inputs is shown below, in full detail on the left and with a layer diagram on the right. Feedforward networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Multiple layers of neurons with nonlinear transfer functions allow the network to learn nonlinear relationships between input and output vectors. The linear output layer is most often used for function fitting (or nonlinear regression) problems. On the other hand, if you want to constrain the outputs of a network (such as between 0 and 1), then the output layer should use a sigmoid transfer function (such as logsig). This is the case when the network is used for pattern recognition problems (in which a decision is being made by the network).

a = logsig(Wp + b)

Figure 3.3.4.1a Single-layer feedforward network (a layer of logsig neurons); Figure 3.3.4.1b Layered feedforward network

a1 = tansig(IW1,1 p1 + b1)

Figure 3.3.4.2 Multilayer feedforward network: input, hidden layer and output layer
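The two-layer computation a1 = tansig(IW p + b1), a2 = purelin(LW a1 + b2) can be traced by hand with small illustrative weights (a NumPy sketch; the weight values are made up, not taken from a trained network):

```python
import numpy as np

# MATLAB-style transfer functions.
def logsig(n):  return 1.0 / (1.0 + np.exp(-n))
def tansig(n):  return np.tanh(n)
def purelin(n): return n

# Illustrative (untrained) weights: a tansig hidden layer feeding a
# purelin output layer, as in Figure 3.3.4.2.
p = np.array([1.0, -1.0])                  # input vector
W1 = np.array([[0.5, -0.5], [1.0, 1.0]])   # hidden-layer weight matrix IW
b1 = np.array([0.0, 0.1])                  # hidden-layer biases
W2 = np.array([[1.0, -2.0]])               # output-layer weight matrix LW
b2 = np.array([0.5])                       # output-layer bias

a1 = tansig(W1 @ p + b1)                   # a1 = tansig(IW p + b1)
a2 = purelin(W2 @ a1 + b2)                 # a2 = purelin(LW a1 + b2)
print(a2)
```

Because the output layer is purelin, a2 is unbounded; replacing purelin with logsig would constrain it to (0, 1), as the text notes for pattern recognition.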

Chapter 4

Feature Extraction

Introduction

Combining input variables to form a smaller number of new variables produces what are referred to as features; the process of generating them is known as feature extraction. Feature extraction is used in handwritten character recognition in order to improve recognition performance. When the processed input data from a character image are too large, the input data are transformed into a reduced set of features.

In this thesis report, various feature extraction techniques have been studied, for example conventional feature extraction, gradient feature extraction and directional distance feature extraction. The new feature extraction technique developed in this thesis is the compass operator with Fourier descriptor. It has been implemented for English character recognition and provides high recognition accuracy with reduced training time.

4.1 Conventional Feature Extraction

In the conventional, or global pixel, method, a region is constituted by the character in an image. The image is represented by an array of pixels, each carrying an associated value X, ranging from 0 for a completely white pixel to 1 for a completely black pixel. The pixel values must be stored for each individual character. The major drawback of the conventional method is that the same character written by different people differs, and the same character written by the same person at different times also differs, so a particular character cannot be recognized reliably. The method also requires a large memory space to store the pixel values.

4.2 Gradient Feature Extraction

The gradient operator used in this thesis to calculate the gradient values is the compass operator. The compass edge detector is an appropriate way to estimate the magnitude and orientation of an edge. Whereas differential gradient edge detection needs a rather time-consuming calculation to estimate the orientation from the magnitudes in the x- and y-directions, compass edge detection obtains the orientation directly from the kernel with the maximal response. The method rotates the Prewitt and Sobel masks through all possible directions; the mask is rotated in the anticlockwise direction.

Figure 4.1. Compass masks

This operator, known as the compass operator, is very useful for detecting weak edges and gives equal brightness all along the edges; this is its major advantage over the Sobel and Prewitt operators. The compass operator works on black-and-white as well as colored images. For a black-and-white image it shows clean output in the graphics window, gradually changing the gradient values.

M(x, y) = a(1)·I(x-1, y-1) + a(2)·I(x-1, y) + a(3)·I(x-1, y+1) + a(4)·I(x, y-1) + a(5)·I(x, y) + a(6)·I(x, y+1) + a(7)·I(x+1, y-1) + a(8)·I(x+1, y) + a(9)·I(x+1, y+1)

The responses N(x, y), O(x, y), P(x, y), Q(x, y), R(x, y), S(x, y) and T(x, y) are computed in exactly the same way from the mask coefficients b(1..9), c(1..9), d(1..9), e(1..9), f(1..9), g(1..9) and h(1..9) respectively.

W = max(max(max(max(max(max(max(M, N), O), P), Q), R), S), T)
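A minimal NumPy sketch of this computation rotates the Prewitt mask through the eight compass directions and keeps the maximum response W at every pixel (the zero padding, kernel ordering and test image are assumptions; loops are used for clarity rather than speed):

```python
import numpy as np

# Build the eight compass kernels by stepping the Prewitt "north" mask's
# ring of border coefficients around the centre (centre weight stays 0).
def prewitt_compass_kernels():
    ring = [1, 1, 1, 0, -1, -1, -1, 0]          # border values, clockwise
    pos = [(0, 0), (0, 1), (0, 2), (1, 2),
           (2, 2), (2, 1), (2, 0), (1, 0)]      # border positions, clockwise
    kernels = []
    for r in range(8):
        k = np.zeros((3, 3))
        for (i, j), v in zip(pos, ring[r:] + ring[:r]):
            k[i, j] = v
        kernels.append(k)
    return kernels

# W(x, y): the maximum of the eight mask responses at each pixel.
def compass_gradient(img):
    rows, cols = img.shape
    pad = np.pad(img, 1)                        # zero padding at the border
    out = np.full((rows, cols), -np.inf)
    for k in prewitt_compass_kernels():
        for i in range(rows):
            for j in range(cols):
                resp = np.sum(pad[i:i + 3, j:j + 3] * k)
                out[i, j] = max(out[i, j], resp)
    return out

# A vertical step edge as a test image: the strongest compass response
# along the edge is the full Prewitt value of 3.
img = np.zeros((5, 5))
img[:, 2:] = 1.0
print(compass_gradient(img).max())
```

Since the eight kernels include each mask and its negation, the maximum response is never negative, and the direction of the winning kernel gives the edge orientation directly.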

Fig. 4.2 Experimental result on a colored image

4.3 Fourier Descriptor Feature Extraction

The Fourier descriptor is computed from a K-point digital boundary in the xy-plane. Starting at an arbitrary point (x0, y0), the coordinate points (x0, y0), (x1, y1), (x2, y2), ..., (xK-1, yK-1) are encountered in tracing the boundary in the anticlockwise direction. These coordinates can be expressed in the form x(k) = xk and y(k) = yk, and the boundary can be represented as the sequence of coordinates s(k) = [x(k), y(k)], for k = 0, 1, 2, ..., K-1. Each coordinate pair can be treated as a complex number, so that

s(k) = x(k) + j y(k) ........ (1)

where k = 0, 1, 2, ..., K-1. The x-axis is treated as the real axis and the y-axis as the imaginary axis of a sequence of complex numbers. Although the interpretation of the sequence is recast, the nature of the boundary itself is not changed. This representation has one great advantage: it reduces a 2-D problem to a 1-D one.

The distinct Fourier transmutation ( DFT ) of s ( K ) is

a(u) = (1/K) Σ s(k) e^(-j2πuk/K) ........ (2)

where u = 0, 1, 2, ..., K-1. The complex coefficients a(u) are called the Fourier descriptors of the boundary. The inverse Fourier transform of these coefficients restores s(k):

s(k) = Σ a(u) e^(j2πuk/K) ........ (3)

where k = 0, 1, 2, ..., K-1. Suppose that, instead of all the Fourier coefficients, only the first P coefficients are used. This is equivalent to setting a(u) = 0 for u > P-1 in equation (3). The result is the following approximation to s(k):

ŝ(k) = Σ (from u = 0 to P-1) a(u) e^(j2πuk/K) ........ (4)

where k = 0, 1, 2, ..., K-1. Although only P terms are used to obtain each component of ŝ(k), k still ranges from 0 to K-1. The same number of points exists in the approximate boundary, but fewer terms are used in the reconstruction of each point.

Figure 4.3 Reconstruction of a boundary using Fourier coefficients: the original boundary (K = 64) and reconstructions with P = 2, P = 8 and P = 64

This figure shows a square boundary consisting of K = 64 points and the results of using equation (4) to reconstruct this boundary for various values of P. When P = 61 the curves begin to straighten, and with one additional coefficient the reconstruction becomes an almost exact replica of the original. Thus a few low-order coefficients are able to capture gross shape, but many more high-order terms are required to define accurately sharp shape features such as corners and straight lines. This result is not unexpected, in view of the roles played by low- and high-frequency components in defining the shape of a region.
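Equations (1)-(4) map directly onto the discrete Fourier transform; below is a NumPy sketch using np.fft (the 8-point square boundary is a small illustrative example, not the K = 64 boundary of Figure 4.3):

```python
import numpy as np

# Eq. (1)-(2): treat boundary points as complex numbers s(k) = x(k) + j y(k)
# and take the DFT with the 1/K factor to get the descriptors a(u).
def fourier_descriptors(xs, ys):
    s = np.asarray(xs, dtype=float) + 1j * np.asarray(ys, dtype=float)
    return np.fft.fft(s) / len(s)

# Eq. (3)-(4): set a(u) = 0 for u > P-1, then invert to get s_hat(k).
def reconstruct(a, P):
    a_trunc = np.array(a)
    a_trunc[P:] = 0
    return np.fft.ifft(a_trunc) * len(a)

# Illustrative 8-point square boundary, traced anticlockwise.
xs = [0, 1, 2, 2, 2, 1, 0, 0]
ys = [0, 0, 0, 1, 2, 2, 2, 1]
a = fourier_descriptors(xs, ys)
full = reconstruct(a, P=len(a))      # P = K recovers the boundary exactly
print(np.allclose(full.real, xs) and np.allclose(full.imag, ys))
```

Note that a(0) is simply the centroid of the boundary, which is why discarding it makes the descriptors translation invariant.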


Chapter 5

Implementation of Feature Extraction Techniques

This chapter describes the feature extraction techniques, namely compass gradient feature extraction, Fourier descriptor feature extraction, and compass gradient feature extraction combined with the Fourier descriptor, for handwritten English character recognition.

5.1 Compass Gradient Feature Extraction

Fig. 5.1 Flowchart

Procedure

5.1.1 Perform the Normalization Process on the Scanned Character

Each character is scanned at 300 pixels per inch using an HP ScanJet 11 scanner. The scanned character is converted into 4096 (64x64) binary pixels. The skeletonization process is then applied to the binary pixel image: extra pixels not belonging to the backbone of the character are deleted, and broad strokes are reduced to thin lines. Skeletonization is illustrated in Figure 5.1.1.

Figure 5.1.1 Skeletonization of an English character

There are many variations in the handwriting of different people. After the skeletonization process, a normalization process is needed; it normalizes the character to 30x30 pixels and shifts it to the top left corner of the pixel window.

5.1.2 Perform Binarization on the Captured 30 x 30 Pixel Image

After the skeletonization and normalization processes have been applied to each character, the normalized image is binarized into a 30 x 30 matrix. Black pixels take the value 1 and white pixels the value 0.
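The binarization step can be sketched as a simple threshold (the 0.5 threshold and a grayscale range of [0, 1] are assumptions; dark stroke pixels map to 1 and light background pixels to 0):

```python
import numpy as np

# Binarize a normalised grayscale character image: dark (stroke) pixels
# become 1, light (background) pixels become 0.
def binarize(img, threshold=0.5):
    return (np.asarray(img) < threshold).astype(np.uint8)

print(binarize([[0.1, 0.9], [0.7, 0.2]]))
```

The resulting 30 x 30 binary matrix is the input both to the compass operator and to boundary extraction.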

5.1.3 Implementation of the Compass Operator

The compass operator is applied to the 30 x 30 matrix for gradient computation. The gradient values are decomposed in a clockwise direction, giving 9 discrete gradient values.

Fig. 5.1.2 Experimental result on a colored image

5.1.4 Training Data on the Neural Network

The compass operator output is reshaped into a 900 x 1 column matrix, which is given as input to the feedforward neural network. The command used for the feedforward network is net = train(net, I, g), where I is the input matrix and g is the goal. The goal for training an input matrix is set as a 2 x 1 matrix, in [0 1]' or [1 0]' form.

Figure 5.1.3 Experimental result of training data

5.1.5 Goal Met

Training stops when the gradient of successive training iterations falls below 10^-10, with the network training goal set to 10^-5.

5.1.6 Testing Data on the Neural Network

The command for the simulation result is out = sim(net, I).

5.2 Fourier Descriptor Feature Extraction

Fig.5.2.1 Flowchart

Procedure

5.2.1 Perform the Normalization Process on the Scanned Character

Scanning, skeletonization and normalization are performed exactly as described in Section 5.1.1: each character is scanned at 300 pixels per inch, converted into 4096 (64x64) binary pixels, skeletonized, normalized to 30x30 pixels, and shifted to the top left corner of the pixel window. Skeletonization is illustrated in Figure 5.2.2.

Figure 5.2.2 Skeletonization of an English character

5.2.2 Perform Binarization on the Captured 30 x 30 Pixel Image

As in Section 5.1.2, the normalized image is binarized into a 30 x 30 matrix in which black pixels take the value 1 and white pixels the value 0.

5.2.3 Feature Extraction by Fourier Descriptor

In feature extraction, first the boundary of the given character is extracted; then the Fourier descriptor is computed on the extracted boundary.

Fig. 5.2.3 Experimental result

5.2.4 Training Data on the Neural Network

The extracted Fourier descriptor features are arranged as a 900 x 1 column matrix and given as input to the feedforward neural network. As in Section 5.1.4, the command net = train(net, I, g) is used, where I is the input matrix and g is the goal, set as a 2 x 1 matrix in [0 1]' or [1 0]' form.

Figure 5.2.4 Experimental result of training data

5.2.5 Goal Met

Training stops when the gradient of successive training iterations falls below 10^-10, with the network training goal set to 10^-5.

5.2.6 Testing Data on the Neural Network

The command for the simulation result is out = sim(net, I).

5.3 Combined Analysis of Fourier Descriptor and Compass Operator

When the Fourier descriptor is combined with the compass operator, accuracy increases and the time needed to recognize a character decreases. First, the handwritten scanned character is normalized. Second, the normalized character is given as input to the compass operator, which brightens the edges one by one until all edges of the given character are brightened. Finally, the boundary of the character is extracted and the Fourier descriptor is computed.

Fig. 5.3.1 Flowchart

Fig. 5.3.2 Experimental result of the combined approach

5.3.1 Training Data on the Neural Network

The combined features form a 900 x 1 column matrix, which is given as input to the feedforward neural network. As in Section 5.1.4, the command net = train(net, I, g) is used, where I is the input matrix and g is the goal, set as a 2 x 1 matrix in [0 1]' or [1 0]' form.

Figure 5.3.3 Experimental result of training data

Chapter 6

Results and Discussion

6.1 Introduction

This chapter compares the combined approach (compass operator with Fourier descriptor) against Fourier descriptor feature extraction and compass operator feature extraction individually. The comparison is made in terms of training time, classification time, recognition accuracy and number of iterations.

6.1.1 Compass operator Feature Extraction

An analysis of the experimental results has been performed; it is shown in Table 6.1.1.

Input to MLPN          Hidden Units   Iterations   Training Time (sec)   Classification Time (ms)   Training Set (%)   Test Set (%)
30x30 gradient input   12             50           34.813                0.125                      100                95

Table 6.1.1 Results for handwritten English characters using a backpropagation network

This technique requires more time to train the network and more classification time. The structural analysis shows that as the number of hidden nodes increases, the number of iterations needed to recognize the handwritten characters also increases.

6.1.2 Fourier Descriptor Feature Extraction

Five hundred samples were collected from 10 people, 50 samples each; 250 samples were used for training (training data) and 250 for testing (test data).

An analysis of the experimental results has been performed; it is shown in Table 6.1.2.

Hidden Nodes (neurons)   Learning Rate   Momentum Factor   Epochs   Recognition % (Training Set)   Recognition % (Test Set)
12                       0.2             0.8               50       100                            89
24                       0.2             0.8               100      100                            94
36                       0.2             0.8               200      100                            94

Table 6.1.2 Results for handwritten English characters using an MLP

6.1.3 Fourier Descriptor Feature Extraction with Compass Operator

Five hundred samples were collected from 10 people (male and female), 50 samples each; 300 samples were used for training (training data) and 200 for testing (test data).

An analysis of the experimental results has been performed; it is shown in Table 6.1.3.

Hidden Nodes (neurons)   Training Time (sec)   Classification Time (ms)   Epochs   Recognition % (Training Set)   Recognition % (Test Set)
12                       0.1406                59.922                     89       100                            96

Table 6.1.3 Results for handwritten English characters using the combined approach

A bar chart comparing the three techniques in terms of recognition accuracy is shown in Figure 6.1.

Figure 6.1 Bar chart of recognition accuracy