Notes on “A unified theory for the origin of grid cells through the lens of pattern formation”

I just read the paper and I’m still processing it, but here are some of my notes, thoughts, and questions.

https://papers.nips.cc/paper/2019/hash/6e7d5d259be7bf56ed79029c4e621f44-Abstract.html

Three main model types in neuroscience:

Descriptive explanations delineate an abstract characterization of a phenomenon, while mechanistic and normative explanations bridge abstractions of different levels. Mechanistic explanations show how a phenomenon emerges from lower-level components (e.g., how circuit effects emerge from cellular or molecular properties), while normative explanations appeal to the phenomenon’s ability to perform higher-level functions (e.g., why sensory circuits are needed to support behavioral control).

“We observe that the structure of the grid patterns depends sensitively on the shape of the place cell tuning curves.”
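
To convince myself why the tuning-curve shape matters, here is a quick NumPy sketch (my own toy parameters, not the paper’s): the Fourier spectrum of a pure Gaussian field peaks at zero frequency, while a balanced center-surround (difference-of-Gaussians) field peaks at a nonzero frequency, and the paper’s analysis ties the existence and period of the grid pattern to exactly this kind of spectral structure.

```python
import numpy as np

# Toy comparison (parameters are mine, purely illustrative): Fourier profile
# of two candidate place-cell tuning curves along a 1D slice of the arena.
x = np.linspace(-50, 50, 1001)   # cm, arbitrary slice
sigma = 5.0                      # assumed place-field width

gaussian = np.exp(-x**2 / (2 * sigma**2))
# Balanced difference of Gaussians: center minus a wider, weaker surround.
dog = gaussian - 0.5 * np.exp(-x**2 / (2 * (2 * sigma)**2))

def peak_frequency(curve):
    spectrum = np.abs(np.fft.rfft(curve))
    freqs = np.fft.rfftfreq(len(curve), d=x[1] - x[0])   # cycles per cm
    return freqs[np.argmax(spectrum)]

print("Gaussian tuning peaks at f =", peak_frequency(gaussian))      # 0
print("Difference-of-Gaussians peaks at f =", peak_frequency(dog))   # > 0
```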

“A non-negativity constraint induces hexagonal grids. [10] observed that imposing a nonnegativity constraint on the output of their 1-layer neural network changed the learned grid patterns from square to hexagonal.”
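
Here is a rough toy in the spirit of the [10] result (my own simplified setup, not their exact model): Gaussian place cells tile a small 2D arena, and a single output unit extracts a PCA-like leading component of their covariance by power iteration, optionally projecting the weights onto the nonnegative orthant at every step. It only shows how the constraint changes the character of the solution, from a sign-changing, sinusoid-like mode to a rectified, bumpier one; a clean square-versus-hexagon comparison needs a larger arena and more careful boundary handling.

```python
import numpy as np

# Assumption-heavy sketch of a nonnegativity-constrained, PCA-like computation
# over place-cell activity (arena size, field width, and the projected power
# iteration are my choices, not the paper's).
rng = np.random.default_rng(0)
L = 24        # arena of L x L positions
sigma = 1.5   # place-field width in lattice units

# One Gaussian place cell centered on each lattice position.
coords = np.stack(np.meshgrid(np.arange(L), np.arange(L), indexing="ij"), -1).reshape(-1, 2)
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
activity = np.exp(-d2 / (2 * sigma**2))        # (positions, place cells)
activity -= activity.mean(axis=0)              # center each cell over positions
cov = activity.T @ activity / activity.shape[0]

def leading_component(cov, nonnegative, steps=200):
    """Power iteration, optionally projected onto the nonnegative orthant."""
    w = rng.random(cov.shape[0])
    for _ in range(steps):
        w = cov @ w
        if nonnegative:
            w = np.maximum(w, 0.0)
        w /= np.linalg.norm(w) + 1e-12
    return w.reshape(L, L)   # weights over place cells, viewable as a spatial map

unconstrained = leading_component(cov, nonnegative=False)  # sign-changing, Fourier-like
constrained = leading_component(cov, nonnegative=True)     # nonnegative, bumpier pattern
```

Plotting `constrained` as an image is the quickest way to eyeball the resulting pattern.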

“We further show that due to the translation-invariance of place cell responses, the learning dynamics of this position encoding objective can be formulated as a pattern forming dynamical system, allowing us to understand the nature and structure of the resultant grid-like solutions and their dependence on various parameters.”
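
The pattern-formation framing clicked for me with a 1D caricature (all parameters invented by me): when the interactions are translation invariant, the linearized weight dynamics are a convolution, so each spatial frequency grows independently at a rate given by the kernel’s Fourier transform, and the fastest-growing frequency sets the period of the pattern that emerges from small random initial weights.

```python
import numpy as np

# 1D pattern-formation caricature: dw/dt = kernel (*) w, with a norm constraint
# to keep the amplitude bounded. The dominant wavelength of w should match the
# peak of the kernel's Fourier transform. Kernel shape and step sizes are mine.
rng = np.random.default_rng(1)
n, dt, steps = 256, 0.1, 400

idx = np.arange(n)
dist = np.minimum(idx, n - idx).astype(float)     # distances on a ring
kernel = np.exp(-dist**2 / (2 * 4.0**2)) - 0.7 * np.exp(-dist**2 / (2 * 12.0**2))

growth = np.fft.rfft(kernel).real                 # growth rate of each frequency
predicted = np.argmax(growth[1:]) + 1             # fastest-growing (non-DC) mode

w = 0.01 * rng.standard_normal(n)                 # small random initial weights
for _ in range(steps):
    w = w + dt * np.fft.irfft(np.fft.rfft(kernel) * np.fft.rfft(w), n)  # circular conv
    w /= np.linalg.norm(w)                        # crude amplitude constraint

observed = np.argmax(np.abs(np.fft.rfft(w))[1:]) + 1
print("fastest-growing mode:", predicted, "| dominant mode in w:", observed)
```

As I read the paper, in 2D the same linear amplification picks out a ring of plane waves, and it is then the nonlinearity (including the non-negativity constraint) that decides whether stripes, squares, or hexagons win.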

I need to study continuous attractor models more.
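
To get started on that, here is a minimal textbook-style ring attractor I wrote for myself (generic, with invented parameters; not the specific networks discussed in the paper): local excitation plus uniform inhibition holds a bump of activity in place, and blending in a slightly rotated copy of the weights, gated by a “velocity” signal, makes the bump drift around the ring. That drifting bump is the basic path-integration mechanism in continuous attractor models.

```python
import numpy as np

# Minimal 1D ring attractor (textbook-style sketch, made-up parameters).
N = 128
theta = 2 * np.pi * np.arange(N) / N
dtheta = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))  # wrapped angle differences

W = 2.0 * np.exp(-dtheta**2 / (2 * 0.3**2)) - 0.5   # local excitation, uniform inhibition
W_shift = np.roll(W, 1, axis=1)                      # the same weights, offset by one neuron

r = np.maximum(np.cos(theta), 0.0)                   # initial activity bump at theta = 0
r /= r.sum()

for t in range(1500):
    v = 0.2 if t > 500 else 0.0                      # constant "velocity" input, switched on later
    drive = (1 - v) * (W @ r) + v * (W_shift @ r)    # velocity gates the offset weights
    r = np.maximum(drive, 0.0)                       # rectification
    r /= r.sum() + 1e-12                             # crude divisive normalization

print("bump peak is now at neuron index:", int(np.argmax(r)))  # drifted away from 0
```

With v = 0 the bump just sits where it started; with v > 0 it should move at a roughly constant rate, which is exactly the property a path integrator needs.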

“Finally, a growing body of work has explored experimentally the hypothesis that MEC encodes continuous variables other than position, such as sound pitch [24] or abstract quantities like the width and height of a bird [25]. While we have referred to a “position” encoding objective and “path” integration, we note that our theory actually holds for generic continuous variables. That is, we would expect networks trained to keep track of sound pitch and volume to behave the same way. Perhaps, intriguingly, grid like structure may be relevant for neural processing in even more abstract domains of semantic cognition [25, 26]. Overall, the unifying pattern formation framework we have identified, that spans both normative and mechanistic models, affords a powerful conceptual tool to address many questions about the origins, structure, variability and robustness of grid-like representations in the brain.”
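
One tiny sanity check of the “generic continuous variables” point (my own toy, not from the paper): nothing in the construction uses the fact that the encoded variable is physical position, so Gaussian tuning over, say, (pitch, volume) is formally identical to place-cell tuning over (x, y).

```python
import numpy as np

# The same Gaussian population code, regardless of what the two axes mean.
def tuning(stimulus, centers, sigma=0.1):
    d2 = ((stimulus[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma**2))

centers = np.random.default_rng(0).random((200, 2))       # tuning centers in [0, 1]^2
place_response = tuning(np.array([0.4, 0.7]), centers)    # interpret axes as (x, y)
sound_response = tuning(np.array([0.4, 0.7]), centers)    # interpret axes as (pitch, volume)
assert np.allclose(place_response, sound_response)        # literally the same computation
```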

So now that we see that a 1-layer NN can reproduce grid-like patterns, how can we use those neurons?

I would like to develop a test that uses grid patterns, similar to DeepMind’s RL-based navigation work, and see whether the 1-layer NN and the RNN could be used interchangeably.
