I just finished reading “Grid-like Neural Representations Support Olfactory Navigation of a Two-Dimensional Odor Space.” It’s an interesting paper showing that olfactory (smell) information seems to be represented with grid-like codes in the brain. The authors reference a number of papers in which other researchers tested whether grid cells can represent other kinds of sensory data; there have already been multiple studies on abstract concepts, sound, 2D navigation, virtual navigation, and more. So in this paper they experimented with a 2D odor space. For the experiment, they trained human subjects to imagine moving in certain directions through an imagined grid world, where each location corresponded to a certain combination of smells at different intensities: “Our findings reveal that two-dimensional arrays of odor intensities, which themselves cannot be topographically encoded as a set of Cartesian coordinates on the olfactory epithelial sheet, nevertheless map onto grid-like scaffolds that can support spatial orientation and route planning within an olfactory space.”
“The sense of smell is fundamentally a predictive sense. Each sniff represents an olfactory snapshot at a specific time and place and simultaneously represents a prediction of what odor is likely to be encountered on the next sniff, at the next time and place (Jacobs, 2012). The sense of smell is also a distance sense, as airborne odors can defy physical boundaries and the absence of light in ways that visual information cannot, providing a means of identifying and tracking remote sources (Gire et al., 2016).”
The paper referenced: “From chemotaxis to the cognitive map: The function of olfaction” — https://www.pnas.org/doi/full/10.1073/pnas.1201880109
“Our task was introduced to subjects as an ‘‘odor prediction’’ task, but the latent structure of the map was not revealed until after the experiment. Trajectories were defined using a start odor mixture, along with a visual instruction screen indicating how much the intensities of banana and pine in the mixture would change upon delivery of the end odor (Figures 1B, 1D, and S2). After a 6-s period of mental navigation along the specified trajectory, subjects received the end odor and indicated whether it matched their prediction. On 50% of trials, the end odor was on trajectory, and on 50% of trials, the end odor was off trajectory, varying by 15°–60°. Correct answers would be compatible with successful navigation.”
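To make the trial structure concrete, here is a minimal sketch of the task logic as I understand it. All names and numbers are my own assumptions, not the authors’ code: a point in odor space is a (banana, pine) intensity pair, a trajectory is a direction plus a distance, and off-trajectory probes deviate from the instructed direction by 15°–60°.

```python
import math

def end_odor(start, angle_deg, distance, deviation_deg=0.0):
    """Compute the end-odor mixture after a straight trajectory through
    2D odor space, where start = (banana, pine) intensities.
    A nonzero deviation_deg models an off-trajectory probe odor."""
    theta = math.radians(angle_deg + deviation_deg)
    banana = start[0] + distance * math.cos(theta)
    pine = start[1] + distance * math.sin(theta)
    return (banana, pine)

def on_trajectory(predicted, delivered, tol=0.01):
    """Subject's judgment: does the delivered end odor match the
    mixture predicted from mental navigation along the trajectory?"""
    return math.dist(predicted, delivered) < tol

# Example: imagine moving rightward (0°) from (0.2, 0.3) by 0.1.
predicted = end_odor((0.2, 0.3), 0, 0.1)       # -> (0.3, 0.3)
probe_off = end_odor((0.2, 0.3), 0, 0.1, deviation_deg=30)
```

An off-trajectory probe with a 30° deviation lands at a measurably different mixture, which is the mismatch subjects are asked to detect.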
“To minimize the possibility that the observed grid-like effects could be attributed to visual stimulation, we ensured the positions of the odor columns (‘‘pine’’ and ‘‘banana’’) and the axis labels (‘‘more’’ and ‘‘less’’) were randomly alternated across trials, dissociating spatial changes in visual features from magnitude changes in odor features.”
“In this study, we tested the hypothesis that human subjects, using only the sense of smell, could navigate through a two-dimensional olfactory space. When provided with a start odor location and route (trajectory) instructions, subjects were able to imagine and predict their perceptual translocation in this odor space. Odor navigation was associated with hexagonal grid-like coding in vmPFC, APC, and ERC, with behavioral performance scaling with the robustness of entorhinal responses across subjects. These findings mirror the behavioral relevance of grid-like units in navigation of physical and abstract spaces (Constantinescu et al., 2016; Doeller et al., 2010; Kunz et al., 2015) and highlight the idea that the human brain has access to internalized representations of odor mixture arrays to guide spatial orientation and route planning.”
“Furthermore, by randomizing the visual directions of the odor trajectory and testing a visual model, we eliminated the possibility that grid-like signals were influenced by gaze movement from the visual field (Julian et al., 2018; Killian et al., 2012; Nau et al., 2018). These control tests bring greater confidence to the idea that the human brain can create a 6-fold map out of systematic translations in olfactory perceptual space.”
“One striking finding is that, when human subjects chart their course through odor space, fMRI-based representations in ERC, vmPFC, and APC are generally aligned to the same grid angle. It is possible that different brain areas utilize hexagonal grid architectures to represent different types of mental maps, but for each of these areas to converge on the same preferred grid angle seems unlikely unless there was direct interareal coordination. Moreover, the few rodent studies testing for hexagonal profiles outside of ERC have only identified grid fields in the presubiculum and parasubiculum (Boccara et al., 2010), though single-neuron recordings from humans have reported grid-like spiking patterns in the cingulate cortex (Jacobs et al., 2013). Thus, a plausible alternative explanation would be that odor navigation engages hexagonally periodic activity in ERC, with feedback projections to vmPFC and APC signaling whether the subject is either on or off trajectory as they traverse through olfactory space. Information about angle alignment could be integrated with action-outcome contingencies in vmPFC to refine behavior and support more sophisticated cognitive maps (Schiller et al., 2015; Wikenheiser and Schoenbaum, 2016) and with olfactory information in APC to tag or strengthen a set of odor representations associated with the current trajectory.”
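The “preferred grid angle” idea above comes from the standard fMRI grid-code analysis: a hexagonally modulated region should respond as cos(6(θ − φ)), where θ is the trajectory direction and φ the region’s preferred grid orientation. A common way to estimate φ is to regress the signal onto cos(6θ) and sin(6θ) and recover the angle from the two betas. The sketch below simulates this on synthetic data; the numbers (200 trials, grid angle 17°, noise level) are my own illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate trajectory directions and a 6-fold modulated signal
# with a "true" preferred grid orientation of 17 degrees.
phi_true = np.deg2rad(17.0)
theta = rng.uniform(0, 2 * np.pi, size=200)
signal = 1.5 * np.cos(6 * (theta - phi_true)) + 0.1 * rng.standard_normal(200)

# Regress onto the two quadrature regressors cos(6θ) and sin(6θ),
# then recover the preferred orientation from the fitted betas.
X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta)])
beta_cos, beta_sin = np.linalg.lstsq(X, signal, rcond=None)[0]
phi_hat = np.arctan2(beta_sin, beta_cos) / 6.0  # defined modulo 60°

print(np.rad2deg(phi_hat))  # close to the simulated 17°
```

Because of the 6-fold symmetry, φ is only defined modulo 60°, which is why cross-region alignment of grid angles (as reported for ERC, vmPFC, and APC) is a nontrivial finding rather than a bookkeeping artifact.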
“In the rodent literature, an interplay between hippocampal place fields and entorhinal grid cells is thought to enhance and stabilize place cell activity (Hales et al., 2014; Langston et al., 2010; Wills et al., 2010) and may provide a mechanism for associating episodes, position, and velocities to predict future locations (Bush et al., 2014; Rennó-Costa and Tort, 2017; Sanders et al., 2015). Indeed, a computational simulation of spatial navigation showed that an artificial agent with a ‘‘grid network’’ incorporated into the system learns more efficiently than a control agent with only ‘‘place cells’’ available (Banino et al., 2018).”
The Banino paper is a DeepMind paper, so I will have to read that! “Vector-based navigation using grid-like representations in artificial agents”
“An inverse question can also be posed: why was the quadrature filter unable to identify 6-fold (univariate) effects in ERC? One possible explanation is that fMRI signal quality (temporal signal-to-noise ratio [tSNR]) is relatively weak in ERC, consistent with its position in an area of higher MRI signal artifact. Therefore, to examine whether lower overall ERC signal quality—in a given subject—tends to be associated with weaker ERC grid-like effects, we tested the hypothesis that, across subjects, ERC voxels with weaker tSNR would also have lower 6-fold effects based on the use of the quadrature filter (see STAR Methods).”
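The tSNR measure mentioned here has a simple definition: for each voxel’s time series, the mean over time divided by the standard deviation over time. Here is a minimal sketch with simulated voxels (the noise levels and series lengths are my own assumptions), showing why a high-artifact voxel with the same mean signal scores lower:

```python
import numpy as np

def tsnr(timeseries):
    """Temporal signal-to-noise ratio of a voxel time series:
    mean over time divided by standard deviation over time."""
    ts = np.asarray(timeseries, dtype=float)
    return ts.mean() / ts.std()

# Hypothetical voxels with the same mean signal (100) but different
# temporal noise, e.g. a clean voxel vs. an artifact-prone ERC voxel.
rng = np.random.default_rng(1)
clean_voxel = 100 + 1.0 * rng.standard_normal(500)
noisy_voxel = 100 + 5.0 * rng.standard_normal(500)
```

Under this definition, the authors’ across-subject test amounts to asking whether voxels with lower tSNR (more temporal noise relative to signal) also show weaker 6-fold amplitudes from the quadrature filter.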
The results were not as strong as I expected, but they do show that a grid-like code fires for olfactory stimuli. As I read more papers about grid cells, I keep seeing work arguing that grid cells are not exactly encoding position, but instead some kind of structural representation used in conjunction with place cells. I will write more about this later.