SOM-mask layer for sparse activation

Posted on March 25, 2018

SOM is one of my favorite models in traditional neural networks. It's widely used to cluster and visualize data. I previously wrote a short summary on SOM.

My belief is that the emergence of SOM-like organization in our brain saves energy during both inference and learning. A SOM can map raw representations into different areas and use different subnetworks to handle different types of tasks. Rather than entangling every task in one network, it can handle tasks hierarchically. And based on the observation that activations in a neural network are usually sparse, we can relate the activated neurons in a semantic space and ignore the irrelevant neurons before computing their outputs.

The SOM-mask layer can be categorized as a type of dropout layer. Classical dropout uses a random mask, and other dropout variants filter neurons using thresholds, hash tables, or other metrics. In a similar way, SOM-mask filters neurons based on clustering results: neurons sharing similar patterns are activated together.
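
To make the idea concrete, here is a minimal NumPy sketch of how such a mask could work. Everything here is my own illustrative assumption rather than a finished implementation: the class name `SOMMask`, the random neuron-to-cell assignment, and the Chebyshev-radius neighborhood are all placeholders. A small SOM codebook clusters the input, each hidden neuron is assigned to one SOM cell, and only neurons whose cell lies near the winning cell (the BMU) stay active:

```python
import numpy as np

class SOMMask:
    """Sketch of a SOM-mask layer: a small SOM clusters the input, each
    hidden unit is assigned to one SOM cell, and only units whose cell
    lies near the best-matching unit (BMU) are kept active."""

    def __init__(self, grid_shape, in_dim, n_hidden, radius=1, seed=0):
        rng = np.random.default_rng(seed)
        self.grid_shape = grid_shape  # e.g. a (4, 4) SOM grid
        n_cells = grid_shape[0] * grid_shape[1]
        # Codebook vectors; a real version would train these with the
        # standard SOM update rule instead of leaving them random.
        self.codebook = rng.normal(size=(n_cells, in_dim))
        # Assumed assignment: each hidden unit belongs to one SOM cell.
        # In practice this would come from clustering the units' weights.
        self.assignment = rng.integers(0, n_cells, size=n_hidden)
        self.radius = radius

    def bmu(self, x):
        """Index of the codebook vector closest to input x."""
        dists = np.linalg.norm(self.codebook - x, axis=1)
        return int(np.argmin(dists))

    def mask(self, x):
        """Binary mask over hidden units: 1 if the unit's cell is within
        `radius` grid steps (Chebyshev distance) of the BMU, else 0."""
        h, w = self.grid_shape
        by, bx = divmod(self.bmu(x), w)          # BMU grid coordinates
        cy, cx = divmod(self.assignment, w)      # cell coords per unit
        grid_dist = np.maximum(np.abs(cy - by), np.abs(cx - bx))
        return (grid_dist <= self.radius).astype(np.float32)

# Usage: mask the hidden activations before (or after) the nonlinearity.
rng = np.random.default_rng(1)
x = rng.normal(size=16)                      # input vector
W = rng.normal(size=(64, 16))                # hidden layer weights
layer = SOMMask(grid_shape=(4, 4), in_dim=16, n_hidden=64, radius=1)
h = np.maximum(W @ x, 0.0) * layer.mask(x)   # ReLU, then SOM mask
```

Unlike the random mask of classical dropout, this mask is deterministic given the input: inputs falling into the same SOM region activate the same subnetwork, which is what lets semantically related neurons fire together.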

This project is currently paused, and I'm sharing some preliminary results with you:
