Neural Networks -- Activation layers



jloizides
07-14-2008, 06:08 AM
I'm using the FeedForwardNetwork class in IMSL C#. I'm trying to understand how to use the different options available to me, like the hidden layer activation. At the moment, when I use:

hiddenLayerActivation = Imsl.DataMining.Neural.Activation.Linear;
outputLayerActivation = Imsl.DataMining.Neural.Activation.Linear;

I get a sensible output. However, when I change this to any of the others, e.g. Logistic, Softmax, or Tanh, I get back a constant value for the forecast.

At the moment I'm using a QuasiNewtonTrainer with 6 inputs and 2 hidden layers. The inputs and outputs are doubles.
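
For reference, here is roughly how everything is wired together (a trimmed sketch rather than my actual code; I'm quoting member names like CreateInputs, LinkAll, Perceptrons, Train, and Forecast from the IMSL FeedForwardNetwork examples from memory, so they may not be exact):

using Imsl.DataMining.Neural;

// 6 inputs, two hidden layers, one output for the forecast.
FeedForwardNetwork network = new FeedForwardNetwork();
network.InputLayer.CreateInputs(6);
network.CreateHiddenLayer().CreatePerceptrons(4);
network.CreateHiddenLayer().CreatePerceptrons(4);
network.OutputLayer.CreatePerceptrons(1);
network.LinkAll();

// Swapping Linear for Logistic/Softmax/Tanh here is the change
// that produces the constant forecast.
Activation hiddenLayerActivation = Imsl.DataMining.Neural.Activation.Linear;
Activation outputLayerActivation = Imsl.DataMining.Neural.Activation.Linear;

// Assumes, as in the IMSL examples, that the output perceptron
// comes last in the Perceptrons array.
Perceptron[] perceptrons = network.Perceptrons;
for (int i = 0; i < perceptrons.Length; i++)
{
    perceptrons[i].Activation = (i < perceptrons.Length - 1)
        ? hiddenLayerActivation
        : outputLayerActivation;
}

// Placeholder patterns standing in for my real data:
// 100 patterns, 6 inputs and 1 target each.
double[][] xData = new double[100][];
double[][] yData = new double[100][];
for (int i = 0; i < 100; i++)
{
    xData[i] = new double[6];
    yData[i] = new double[1];
}

QuasiNewtonTrainer trainer = new QuasiNewtonTrainer();
trainer.Train(network, xData, yData);

double[] forecast = network.Forecast(new double[6]);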

Is there anything that I am misunderstanding? Can you help, please?

John

totallyunimodular
07-14-2008, 12:34 PM
Hmmmm... Does the same behavior occur if you only have one hidden layer as opposed to two? Do nodes in the hidden layer(s) and output layer have the same activation function? What is the nature of the data/problem? What is the constant value? Is the value a different constant depending on the activation function, or is the constant value the same so long as the activation function is not linear? On the face of it, the behavior seems odd, but more context is needed to say anything helpful...
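
As a first experiment, a stripped-down configuration along these lines would separate the topology question from the activation question (sketch only; I'm borrowing member names from the IMSL FeedForwardNetwork examples from memory, so verify them against your docs):

using Imsl.DataMining.Neural;

// Minimal variant: one hidden layer, Tanh hidden activation,
// linear output activation.
FeedForwardNetwork network = new FeedForwardNetwork();
network.InputLayer.CreateInputs(6);
network.CreateHiddenLayer().CreatePerceptrons(3);
network.OutputLayer.CreatePerceptrons(1);
network.LinkAll();

// Assumes the output perceptron comes last in the Perceptrons array.
Perceptron[] perceptrons = network.Perceptrons;
for (int i = 0; i < perceptrons.Length; i++)
{
    perceptrons[i].Activation = (i < perceptrons.Length - 1)
        ? Activation.Tanh
        : Activation.Linear;
}

If this still trains to a constant forecast, the two-layer topology isn't the culprit and the data or training setup is the more likely place to look.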