Visualising neural network architectures

I got tired of manually drawing neural network architectures, so I wrote a custom ggplot2 function capable of the visualisation below. I'm curious: would people be interested in seeing this as a Shiny app?


Hi,

This seems like a very interesting idea!
It might actually be even more interesting as a package, so people can use it to generate their own figures in their projects.

I've got a couple of questions / suggestions:

  • It might be handy to add an argument that lets you shorten the visuals. Often, the number of inputs is very high, but you just want to show the layers and how they are connected. It's probably not the easiest thing to implement though :slight_smile:
  • As the image above shows, some additional annotation such as layer names and input/output info might help
  • The non-linear activation used in the hidden layers (e.g. ReLU) and the one in the final layer (e.g. softmax) could also be interesting annotations
  • Are you planning to add visualizations for other architectures than fully connected layers? Again, that's a challenge with just ggplot...
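The "shorten the visuals" idea in the first bullet could be sketched in base R, for example by drawing only the first and last few nodes of a large layer and leaving a gap for an ellipsis label. `compress_layer` below is a hypothetical helper (not part of the existing function), and `max_nodes = 6` is an arbitrary assumption:

```r
# Hypothetical sketch: cap how many nodes of a layer get drawn, keeping the
# first and last few and flagging a gap where a "..." label could be placed.
compress_layer <- function(n, max_nodes = 6) {
  if (n <= max_nodes) {
    return(list(shown = seq_len(n), gap = FALSE))
  }
  half <- max_nodes %/% 2
  list(
    shown = c(seq_len(half), seq(n - half + 1, n)),  # e.g. 1:3 and 98:100
    gap   = TRUE                                     # draw an ellipsis here
  )
}

compress_layer(100)  # shown = 1, 2, 3, 98, 99, 100 with gap = TRUE
```

The plotting code would then place the ellipsis at the layer's vertical centre whenever `gap` is `TRUE`.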

Great work! Keep it up :slight_smile:
PJ


Excellent input - Thank you very much @pieterjanvc :+1:

So, did this so far: https://leonjessen.shinyapps.io/nnvizRt/

Hopefully, I will find some time to extend it...

Just adding my two cents: it could be really useful to write a helper function that accepts a keras/tensorflow model object as an argument and converts the architecture into the data frame format needed to plot the neural network with ggplot2. Letting users recycle their model object directly into a plot would be appealing, I think.

An example pipeline could be:

library(keras)
library(ggplot2)

model <- 
  keras_model_sequential() %>% 
  layer_dense(units = 12, input_shape = c(12)) %>% 
  layer_dense(units = 1)

model_architecture <- helper_function(model)

model_architecture %>%
  ggplot(...)

Basically, convert the structure below to a data.frame or tibble. I haven't thought through the implementation much; just a random thought.

Model
________________________________________________________________
Layer (type)                Output Shape              Param #   
================================================================
dense_1 (Dense)             (None, 12)                156       
________________________________________________________________
dense_2 (Dense)             (None, 1)                 13        
================================================================
Total params: 169
Trainable params: 169
Non-trainable params: 0
________________________________________________________________
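One guess at what that conversion could look like, in base R. The sketch below assumes a plain vector of layer sizes as input (for a real keras model these might be pulled from `model$layers`, which I haven't verified here); `layers_to_edges` is a hypothetical name, and the node/edge format is just one possible shape for the downstream ggplot code:

```r
# Hypothetical sketch: turn a vector of layer sizes into node and edge data
# frames suitable for plotting a fully connected network with ggplot2.
layers_to_edges <- function(sizes) {
  # One row per node: x = layer index, y = vertical position, centred per layer
  nodes <- do.call(rbind, lapply(seq_along(sizes), function(i) {
    n <- sizes[i]
    data.frame(layer = i, node = seq_len(n), y = seq_len(n) - (n + 1) / 2)
  }))
  # One row per connection between every pair of nodes in adjacent layers
  edges <- do.call(rbind, lapply(seq_len(length(sizes) - 1), function(i) {
    from <- nodes[nodes$layer == i, ]
    to   <- nodes[nodes$layer == i + 1, ]
    expand.grid(x = i, xend = i + 1, y = from$y, yend = to$y)
  }))
  list(nodes = nodes, edges = edges)
}

arch <- layers_to_edges(c(12, 12, 1))  # same shape as the example model
nrow(arch$edges)  # 12*12 + 12*1 = 156 edges
```

The `edges` data frame maps directly onto `geom_segment(aes(x, y, xend = xend, yend = yend))`, and `nodes` onto `geom_point(aes(layer, y))`.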

Hi @mattwarkentin,

Yes, I had exactly the same thought, so it's nice to have it confirmed - cheers for the input!


Good to know my thought wasn't a bad one! I'm happy to help or contribute code if you're interested.

Excellent - I'll create a GitHub repo and get back to you ASAP. What's the easiest way to reach you, @mattwarkentin? :slightly_smiling_face:

@Leon Sounds good! I would probably prefer email correspondence, if that is okay with you. I will send you a direct message with my information.

