Notes on “Symmetry-Based Representations for Artificial and Biological General Intelligence”

The paper: https://arxiv.org/abs/2203.09250

It's from a team at DeepMind, and its purpose is to urge neuroscientists to look for symmetry-based representations in the brain. From the paper: “The idea that there exist transformations (symmetries) that affect some aspects of the system but not others, and their relationship to conserved quantities has become central in modern physics, resulting in a more unified theoretical framework and even ability to predict the existence of new particles. Recently, symmetries have started to gain prominence in machine learning too, resulting in more data efficient and generalisable algorithms that can mimic some of the complex behaviours produced by biological intelligence. Finally, first demonstrations of the importance of symmetry transformations for representation learning in the brain are starting to arise in neuroscience.”

The thesis of the paper is that symmetry transformations revolutionized our understanding of physics, and that the same idea can transform biological and artificial intelligence. It makes sense to turn to physics because intelligence evolved within the constraints of our physical world, so the tasks of AI are constrained by the physical world too. The study of symmetries in physics originates with Noether's theorem in 1915, which proved that every conservation law is grounded in a corresponding continuous symmetry transformation. Since the introduction of Noether's theorem, symmetry transformations have permeated the field at every level of abstraction, from microscopic quantum models to macroscopic astrophysics models. By studying symmetry transformations, physicists have been able to reconcile explanatory frameworks, systematically describe physical objects, and discover new ones. Connecting more of physics to intelligence makes a lot of sense to me: grounded language learning, for instance, rests on the idea that language is connected to the physical world.
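
To make the Noether connection concrete (a standard textbook example, not something the paper derives): if a particle's Lagrangian is unchanged by translating the particle in space, momentum falls out as the conserved quantity.

```latex
% Free particle with Lagrangian L = (m/2)\dot{q}^2. Spatial translation
% q -> q + \epsilon leaves L unchanged (L has no q dependence), and the
% Euler-Lagrange equation then forces the conjugate momentum to be conserved:
\[
  \frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
    = \frac{\partial L}{\partial q} = 0
  \quad\Longrightarrow\quad
  p = \frac{\partial L}{\partial \dot{q}} = m\dot{q}
  \;\;\text{is constant.}
\]
```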


The authors argue that neuroscience should move from studying static objects to studying representations in terms of natural symmetry transformations. Machine learning researchers have already been investigating symmetry learning for a while, and many of deep learning's successes can be attributed to symmetries.

They explain symmetry with an example:

“Given a task, there often exist transformations of the inputs that should not affect it. For example, if one wants to count the number of objects on a table, the outcome should not depend on the colours of those objects, their location or the illumination of the scene. In that case, we say the output produced by an intelligent system when solving the task is invariant with respect to those transformations. Since the sensory input changes with transformations, while the output is invariant, we need to decide what should happen to the intermediate representations. Should they be invariant like the output or should they somehow transform similarly to the input?”
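
In code, the distinction they are pointing at looks something like this (a toy numpy sketch of my own, not from the paper): an invariant map gives the same output under the transformation, while an equivariant map transforms along with its input.

```python
import numpy as np

# Toy sketch of invariance vs. equivariance under translation.
# Counting bright "objects" is invariant: shifting the input leaves the
# count alone. A convolution-style feature map is equivariant: shifting
# the input shifts the features by the same amount.

def count(signal):                       # invariant map
    return int((signal > 0.5).sum())

def conv_features(signal):               # equivariant map: circular blur
    return (np.roll(signal, -1) + signal + np.roll(signal, 1)) / 3

x = np.random.rand(8)
shifted = np.roll(x, 3)                  # the symmetry transformation

assert count(shifted) == count(x)                     # invariance
assert np.allclose(conv_features(shifted),
                   np.roll(conv_features(x), 3))      # equivariance
```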

Intuitively, you can think of symmetries as collapsing the problem space into a much smaller space that a learning algorithm needs to cover, which lets the network learn from data more efficiently. Another common technique is data augmentation: introducing the symmetry into the dataset itself by rotating and flipping the training images. This does improve the model, but it's been shown that building the symmetry directly into the network architecture works better; a sketch of both approaches follows.
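
Here is a minimal PyTorch sketch of the two approaches (my own illustration, not code from the paper): augmentation shows the network transformed copies of the data and hopes it learns the symmetry, while a convolutional layer has translation equivariance baked in by weight sharing, so the symmetry holds before any training happens.

```python
import torch
import torch.nn as nn

def augment(batch):
    """Approach 1: enlarge the dataset with horizontally flipped copies."""
    return torch.cat([batch, torch.flip(batch, dims=[-1])], dim=0)

# Approach 2: nn.Conv2d shares weights across positions, so it is
# translation-equivariant by construction.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

img = torch.rand(1, 3, 16, 16)
print(augment(img).shape)                    # doubled batch from approach 1

shifted = torch.roll(img, shifts=2, dims=-1) # translate the input
out, out_shifted = conv(img), conv(shifted)

# Away from the (zero-padded, wrapped) borders, the untrained features
# simply shift with the input -- the symmetry is architectural, not learned:
interior = slice(3, -1)
print(torch.allclose(torch.roll(out, 2, dims=-1)[..., interior],
                     out_shifted[..., interior], atol=1e-6))
```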

They reference a few studies where neuroscientists have found what look like symmetry-based representations in biological neurons, but a lot more research is needed (hence the paper).

They mention geometric conceptual spaces combined with grid cells, which I have been studying recently: “It is hypothesised that the same principles that allow biological intelligence to navigate the physical space using the place and grid cells may also support navigation in cognitive spaces of concepts, where concepts are seen as convex regions in a geometric space spanned by meaningful axes like engine power and car weight (Balkenius and Gärdenfors, 2016; Gärdenfors, 2004; Gardenfors, 2014).”

One of the reasons the entorhinal cortex and its grid cells are so valuable to study is that, within the brain's cognitive processing system, they appear far down the processing stream, yet we can directly change the input and measure the firing of the neurons. Grid cells fire in a regular lattice, like graph paper, effectively acting as a map. With most other neurons, because we don't fully understand the representation, it is very hard to change the input and understand what the neurons mean.

This difficulty of measuring the representation is discussed in the paper: “…the fact that designing stimuli for discovering interpretable tuning in single cells at the end of the sensory processing pathways is hard. While it is easy to systematically vary stimulus identity, it is hard to know what the other generative attributes of complex natural stimuli may be, and hence to create stimuli that systematically vary along those dimensions.”

Representation of spaces:

“In this view, a vector representation is seen as disentangled with respect to a particular decomposition of a symmetry group into a product of subgroups, if it can be decomposed into independent subspaces where each subspace is affected by the action of a single subgroup, and the actions of all the other subgroups leave the subspace unaffected…. In other words, the vector space of such a representation would be a concatenation of independent subspaces, such that, for example, a change in size only affects the “size subspace”, but not the “position subspace” or any other subspace”.
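
A minimal sketch of that definition (my own toy construction; the subspace names are hypothetical): the representation vector is a concatenation of independent subspaces, and each transformation acts only on "its" subspace while leaving the others untouched.

```python
import numpy as np

# Hypothetical 2-D "size" and "position" subspaces, concatenated into
# one 4-D representation vector [size | position].
SIZE, POS = slice(0, 2), slice(2, 4)

def act_scale(z, s):        # "change size": touches the SIZE subspace only
    out = z.copy(); out[SIZE] *= s; return out

def act_translate(z, t):    # "change position": touches POS only
    out = z.copy(); out[POS] += t; return out

z = np.array([1.0, 2.0, 3.0, 4.0])
z2 = act_scale(z, 2.0)

assert np.allclose(z2[POS], z[POS])                          # position untouched
assert np.allclose(act_translate(z2, 1.0)[SIZE], z2[SIZE])   # size untouched
```

The two actions commute and each only touches its own coordinates, which is exactly the "product of subgroups" structure the quote describes.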

TODO:

  • [read paper 1 more time]
  • [finish notes on mathematical definition]
  • [invariance and equivariance]
  • [disentanglement and entanglement in ML]
  • [MLPs not working]
  • [exemplar representation]
