Neural Net model proposed by Ravi Manda and Dr. Sudhir Trivedi
Any NN trained by back propagation has two phases of computation.

During the forward pass, the neurons in each layer receive input and generate an output according to the transfer function applied at each neuron; these outputs are then multiplied by their respective weights. All such weighted values are summed and fed as input to the neurons of the next layer. At the output layer, the difference between the computed outputs and the target values gives the error for each output node.

During the back propagation of error, the error generated at the output layer is fed back through the layers in order to update the weights. For this purpose the error is treated exactly like an input, and the transfer function is applied at the neurons of each layer as the error is propagated backwards. Thus, the forward pass is charged only with computing the final outputs from which the error is derived.

The crux of this model lies in replacing part of the NN with an equation block. The equation block takes as input the coefficients computed by the preceding layers of the NN, together with the current state of the aircraft as defined by the input parameters at the same time step as the inputs to the NN. The outputs of the equation block are then used to generate the error. Since the forward pass plays no part in the updating of the weights, it should not matter where the error originates: back propagation ensures that whatever error is fed in at the output layer is minimized as training progresses, by updating the weights accordingly.

Check out the Closed Loop System for this model.

Proposed Neural Net model.
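The idea above can be sketched in code. This is only a minimal illustration, not the authors' implementation: the network size (2-2-2), the sigmoid transfer function, the learning rate, and in particular the linear form of `equation_block` are all assumptions made here, since the actual aircraft equations are not given in the text. The point the sketch demonstrates is that the error can originate outside the network (at the equation block) and still be back-propagated through the weights exactly as in an ordinary NN.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical "equation block": combines the network's output
# coefficients with the current aircraft state. A linear combination
# is assumed purely for illustration; the real equations differ.
def equation_block(coeffs, state):
    return sum(c * s for c, s in zip(coeffs, state))

# Tiny 2-2-2 network: inputs -> hidden layer -> two coefficients.
n_in, n_hid, n_out = 2, 2, 2
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
w2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
lr = 0.5  # assumed learning rate

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(n_in)))
         for j in range(n_hid)]
    c = [sigmoid(sum(w2[k][j] * h[j] for j in range(n_hid)))
         for k in range(n_out)]
    return h, c

def train_step(x, state, target):
    h, c = forward(x)
    y = equation_block(c, state)   # forward pass ends at the equation block
    err = target - y               # error generated outside the net
    # Back-propagate: for the linear equation block assumed above, the
    # gradient w.r.t. coefficient k is just state[k]; the rest is the
    # usual delta rule with the sigmoid derivative c*(1-c).
    delta_out = [err * state[k] * c[k] * (1 - c[k]) for k in range(n_out)]
    delta_hid = [h[j] * (1 - h[j]) *
                 sum(delta_out[k] * w2[k][j] for k in range(n_out))
                 for j in range(n_hid)]
    for k in range(n_out):
        for j in range(n_hid):
            w2[k][j] += lr * delta_out[k] * h[j]
    for j in range(n_hid):
        for i in range(n_in):
            w1[j][i] += lr * delta_hid[j] * x[i]
    return err

# Repeatedly training on one (input, state, target) triple drives the
# equation-block output toward the target, even though the net itself
# never sees the target directly.
x, state, target = [0.5, -0.3], [1.0, 0.8], 0.9
e0 = abs(train_step(x, state, target))
for _ in range(200):
    train_step(x, state, target)
_, c = forward(x)
e1 = abs(target - equation_block(c, state))
```

After the loop, `e1` (the remaining error at the equation block) is smaller than the initial error `e0`, which is exactly the claim in the text: back propagation minimizes whatever error is fed in at the output layer, regardless of where that error originated.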
Please send your comments/suggestions to me at: [email protected]