Hi, I'm using the 'brnn' package and I would like to know more about the structure of the network that I've trained, in particular the weights. By inspecting `out$theta` I can see a list of the weights. In my case I've got one output neuron, two hidden neurons, and 13 input neurons; as there seem to be two bias neurons, there are 30 weights in total. In the list that is displayed I can only see two columns with 15 weight values each, and I'm wondering which value belongs to which weight exactly, because I would like to learn more about the importance of my 13 input variables.

I would be very grateful for help. Thanks in advance. Editha

# Bayesian regularized neural network structure

From `help(brnn)`

```
$theta
A list containing weights and biases. The first s components of the list
contain vectors with the estimated parameters for the k-th neuron, i.e.
(w_k, b_k, β_1^[k], ..., β_p^[k])'.
```

When you looked at the `str()` output, you saw two vectors of equal length. The first *should* be the weights and the second the biases for each neuron, since that's the order described. Run some of the examples in `help(brnn)` and see whether your `out$theta` is similar in form. It's possible that the model was mis-specified.
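Going by the help text quoted above, each component of `out$theta` is a vector laid out as (w_k, b_k, β_1^[k], ..., β_p^[k]). A minimal sketch of unpacking one such vector, using a mock stand-in for `out$theta[[k]]` so it runs without a trained model:

```r
# Layout per help(brnn): (w_k, b_k, beta_1^[k], ..., beta_p^[k])'
# for the k-th hidden neuron. With a real model, replace `theta_k`
# with out$theta[[k]].
p <- 13
theta_k <- seq_len(p + 2)        # mock stand-in for out$theta[[k]] (length 15)

w_k    <- theta_k[1]             # weight from hidden neuron k to the output
b_k    <- theta_k[2]             # bias of hidden neuron k
beta_k <- theta_k[3:(p + 2)]     # weights from the 13 inputs into neuron k

length(beta_k)                   # one weight per input variable
```

With two hidden neurons and 13 inputs this gives 15 values per vector and 30 in total, which matches the counts in the question.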

Hi there, thank you very much for your answer!

I tried all the examples and found that each column in the `$theta` list corresponds to a hidden neuron. I'm still not sure which links the weights correspond to, or how the bias neurons connect to the other neurons. My guess is that the first weights in each column belong to the links from the input layer to the respective hidden neuron, the next weights belong to the links from the hidden neuron to the output layer, and the last weight belongs to the bias neuron that connects to the hidden neuron. I'm not sure, though; it's not very clear in my opinion.
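For what it's worth, if the layout documented in `help(brnn)` holds, a crude connection-weight heuristic (not part of brnn, and only a rough proxy for importance) would rank inputs by summing |w_k| * |β_i^[k]| over the hidden neurons. A sketch with mock random weights standing in for `out$theta`:

```r
# Rough input-importance proxy: for each input i, sum over hidden
# neurons of |w_k| * |beta_i^[k]|. Assumes each out$theta[[k]] is
# laid out as (w_k, b_k, beta_1^[k], ..., beta_p^[k]).
p <- 13
set.seed(1)
theta <- list(rnorm(p + 2), rnorm(p + 2))   # mock stand-in for out$theta

importance <- rep(0, p)
for (theta_k in theta) {
  w_k    <- theta_k[1]                      # hidden-to-output weight
  beta_k <- theta_k[3:(p + 2)]              # input-to-hidden weights
  importance <- importance + abs(w_k) * abs(beta_k)
}
order(importance, decreasing = TRUE)        # inputs ranked by this proxy
```

This kind of heuristic is only suggestive: because the hidden layer mixes inputs nonlinearly, raw weight magnitudes don't cleanly measure importance.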

Thank you again for your reply!

I think you've read the docs correctly. `theta` gives the weights, and the biases shift each neuron's activation away from what the weighted inputs alone would give. Neural networks are notorious for their hidden layers and are something of a black box. Although I'm no expert, I don't think it's the right method for assessing the relative importance of the input variables, for just that reason. Good luck!

Yes, I agree with you. This is only one of many algorithms that I'm using, so, as you say, for information on variable importance I'm now going to concentrate more on the other algorithms. Thank you!
