Image Classification Using Elman Neural Network on Master-Slave Architecture

2012 
The objective of this work is the classification of images using the Elman Neural Network (ENN), one of the simplest supervised multi-layer neural networks. We train the network with a parallel algorithm on a Master-Slave architecture to improve performance, and we evaluate both the sequential and the parallel implementations. The performance parameters include speed-up, optimal number of processors and processing time.

Digital image processing is a collection of techniques for manipulating digital images by computer. Classification generally comprises four steps: 1. Pre-processing, 2. Training, 3. Decision and 4. Assessing. The objective of image classification is to identify the features occurring in an image in terms of the objects these features actually represent. The studies were conducted on an image data set consisting of 115 images of five classes (rockets, missiles, jets, aeroplanes and helicopters). The images are of size 400x200 with 256 grey levels, and each pixel of each image was scaled to the range 0-1.

Several classification algorithms have been developed, ranging from the maximum likelihood classifier to neural network classifiers. This study emphasizes the analysis and use of a neural network classifier based on the Elman neural network. Artificial Neural Networks (ANNs) can provide suitable solutions for problems characterized by non-linear data; high-dimensional, noisy, complex, imprecise, imperfect or error-prone sensor data; and the lack of a clearly stated mathematical solution or algorithm. The advantage of an ANN is that a model of the system can be designed from the available data.

In this work, image classification is done by applying the back propagation (BP) algorithm to an Elman neural network. The Elman neural network is a feed-forward network with an input layer, a hidden layer, an output layer and a special layer called the context layer. The output of each hidden neuron is copied into a specific neuron in the context layer, and the value of the context neuron is used as an extra input signal for all the neurons in the hidden layer one time step later (6). In an Elman network, the weights from the hidden layer to the context layer are set to one and are fixed, because the values of the context neurons have to be copied exactly. Furthermore, the initial output weights of the context neurons are equal to half the output range of the other neurons in the network. The Elman network can be trained with gradient descent back propagation and optimization methods (7-8).

An ENN trained using the BP algorithm can be parallelized by partitioning the set of patterns, by partitioning the network, or by a combination of the two. In general, parallel processing can be achieved in two different ways: data set parallelism and algorithmic parallelism (3-5). Algorithmic parallelism offers two possible schemes for the MLP: (i) mapping each layer of the network to a processor, and (ii) mapping a block of neurons to a processor. A special case of the second scheme is to map each neuron of the network to a processor, so that the parallel computer becomes a physical model of the network. The proposed parallel implementation avoids recomputation of weights and requires fewer communication cycles per pattern; the amount of data communicated among the processors in the computing network is also smaller. We obtain performance parameters such as speed-up, optimal number of processors and processing time for both the sequential implementation and the parallel implementation on the Master-Slave architecture, as sketched below.
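The paper's code is not reproduced here; the loop below is a minimal single-process sketch of the pattern-partitioned (data set parallel) scheme, simulating P slaves and assuming a generic, hypothetical grad_fn. Each slave computes the gradient for its share of the patterns against the same weight copy, and the master sums the partial gradients and applies a single update, so weights are never recomputed and only one gradient exchange per batch is needed.

```python
import numpy as np

def train_epoch_master_slave(patterns, targets, weights, grad_fn, lr=1e-3, P=4):
    """Simulate pattern-partitioned BP training on a master and P slaves.

    grad_fn(W, X, T) returns the error gradient summed over patterns X, T;
    in the real system each slave would evaluate it on its own processor.
    """
    shards_x = np.array_split(patterns, P)    # master scatters the patterns
    shards_t = np.array_split(targets, P)
    total_grad = np.zeros_like(weights)
    for X, T in zip(shards_x, shards_t):      # each iteration plays one slave
        total_grad += grad_fn(weights, X, T)  # all slaves use identical weights
    # Master gathers the partial gradients, updates, and re-broadcasts.
    return weights - lr * total_grad

# Toy usage with a linear least-squares gradient (hypothetical data):
grad = lambda W, X, T: X.T @ (X @ W - T)      # gradient summed over patterns
rng = np.random.default_rng(1)
X, T = rng.random((115, 8)), rng.random((115, 5))  # 115 patterns, 5 classes
W = train_epoch_master_slave(X, T, np.zeros((8, 5)), grad)
```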
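These performance parameters have simple definitions. As a small illustration with hypothetical timings (not the paper's measurements), speed-up for P processors is S(P) = T(1)/T(P), efficiency is E(P) = S(P)/P, and the optimal number of processors can be read off as the P that maximizes speed-up:

```python
def speedup(t_seq, t_par):
    """Speed-up S(P) = T(1) / T(P)."""
    return t_seq / t_par

def efficiency(s, p):
    """Efficiency E(P) = S(P) / P."""
    return s / p

# Hypothetical processing times (seconds) for P = 1..5 processors.
times = {1: 100.0, 2: 55.0, 3: 40.0, 4: 34.0, 5: 35.0}
best_p = max(times, key=lambda p: speedup(times[1], times[p]))
s_best = speedup(times[1], times[best_p])
print(best_p, s_best, efficiency(s_best, best_p))  # -> 4, ~2.94, ~0.74
```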
The analytical and experimental results show that the proposed parallel implementation performs better than the sequential implementation.

II. Elman Neural Network

Let us consider a fully connected ENN trained using the BP algorithm. Let N_0 be the number of neurons in the input layer and, similarly, let N_l, l = 1, 2, 3, be the number of neurons in the hidden, output and context layers respectively. The different phases of the learning algorithm and the corresponding processing times are discussed in the following sections (10).
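To make this architecture concrete, here is a minimal NumPy sketch (not the authors' implementation) of one Elman forward step, assuming hypothetical layer sizes and tanh activations. The context layer simply holds an exact copy of the previous step's hidden activations, reflecting the fixed hidden-to-context weights of one described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N_0 inputs, N_1 hidden (= N_3 context), N_2 outputs.
N0, N1, N2 = 80000, 30, 5                # flattened 400x200 image, 5 classes
W_in  = rng.normal(0.0, 0.1, (N1, N0))   # input   -> hidden weights
W_ctx = rng.normal(0.0, 0.1, (N1, N1))   # context -> hidden weights
W_out = rng.normal(0.0, 0.1, (N2, N1))   # hidden  -> output weights

def elman_step(x, context):
    """One forward pass; the context carries last step's hidden output."""
    hidden = np.tanh(W_in @ x + W_ctx @ context)
    output = np.tanh(W_out @ hidden)
    new_context = hidden.copy()          # copy weights fixed to one: exact copy
    return output, new_context

x = rng.random(N0)                       # a pixel vector already scaled to [0, 1]
y, context = elman_step(x, np.zeros(N1)) # context starts at zero
```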