How does the brain map out an environment?
‘Smellovision’ virtual reality has proven that the brain can form a map of space based solely on smell. How far can virtual reality take us in understanding spatial mapping in a multi-sensory environment?
Brad Radvansky received a BSc in Neuroscience and Physics from the University of Pittsburgh (PA, USA), where he studied electrically conducting polymers for neural electrode biocompatibility. He then did his Master’s work on network-level plasticity in cultured neurons at Georgetown University (DC, USA). Currently, he is pursuing a Neuroscience PhD at Northwestern University (IL, USA) in the lab of Dan Dombeck. His project involves the development and application of olfactory virtual reality (VR) methods to determine how sensory features of an environment can inform the cognitive map of space in the rodent hippocampus.
Dan Dombeck received his BSc in Physics at the University of Illinois (IL, USA). He earned his PhD in Physics at Cornell University (NY, USA) in the lab of Watt Webb, where he developed nonlinear optical techniques to measure the structure and function of living neurons. He then trained as a postdoc in the lab of David Tank at Princeton University (NJ, USA) where he developed techniques to perform cellular resolution functional imaging in awake mice navigating in VR environments. Dan is now an associate professor of neurobiology at Northwestern University where his lab develops novel imaging and behavioral techniques that they use to understand how the mammalian brain forms, stores and recalls memories of everyday experiences.
Please can you provide a brief overview of your research?
Dan: My lab studies spatial navigation in rodents – how animals get from point A to point B in their environments. We study a few different brain regions to try to understand the activity patterns that help animals know where they are along the path from point A to point B, how they retrieve memories to get from point A to point B, and how they form memories to get between these different points.
We study a specific type of neuron in the hippocampus, called place cells, which fire at specific locations as you navigate around your environment. For example, if you’re walking down the hallway to go to the water fountain, there are neurons in your hippocampus that fire at every particular location along that path. These cells form a map of the environment, and the idea is that these cells can also be reactivated offline when you close your eyes and think about walking to the water fountain.
We think it must be the same place cells that are reactivated in the hippocampus to help you recall that memory. These cells aren’t just involved in forming a map to help you navigate online but they’re also involved in forming a memory of the experiences and places that you’ve been to. We’re trying to understand how these place cells form and what different inputs drive the firing of these cells.
In my lab, we use VR systems and laser scanning microscopes to study these place cells in rodents. We head-fix rodents while they run on a free-floating Styrofoam ball. Their movements on the ball typically control a visual VR system around them so they can navigate in VR. The reason for the head fixation is so that we can image down into the hippocampus with really high resolution to visualize the hippocampal place cells and try to see what inputs are driving the cells, as well as how the cells change when we manipulate their environment.
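As a rough illustration of the closed loop Dan describes – a minimal sketch, not the lab’s actual code – the ball’s motion can be integrated each frame into a position on a virtual track, which then drives the rendered scene (the track length, sensor readings and function names below are assumptions):

```python
# Minimal sketch (not the lab's actual code): turning treadmill-ball motion
# into a position on a 1D virtual track, updated once per frame.

def update_position(position_cm, ball_displacement_cm, track_length_cm=180.0):
    """Advance the virtual position by the ball's forward displacement,
    wrapping around at the end of the track."""
    return (position_cm + ball_displacement_cm) % track_length_cm

# Example update loop fed by hypothetical ball-sensor readings (cm per frame)
position_cm = 0.0
for displacement in [0.4, 0.7, 0.5]:
    position_cm = update_position(position_cm, displacement)
    # render_scene(position_cm)  # placeholder for the visual VR renderer
```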
The idea of this ‘smellovision’ project was to try to decompose the different sensory inputs that drive place cells. Most of our experiments, and almost all VR experiments so far, have used only visual VR systems. We really wanted to look at how different sensory modalities are integrated to form a map of space, so we added olfactory cues in order to look at both the olfactory and the visual inputs driving place cells.
How did you go about developing the technology that enabled a smell to define a position?
Brad: It’s actually quite difficult to control smells. Just think about any kind of smell in the real world – it’s a chemical concentration in the air that is affected by diffusion and turbulence. It’s difficult to create a precise and quantifiable landscape of smells in the real world. But in VR, we can do it more easily.
The trick is to rapidly deliver odorants to the nose of an animal as it’s running around in VR. We use devices called mass flow controllers that open a valve in proportion to the voltage applied to them. We program these to open the valves a certain amount based on the animal’s position in the virtual world. Using this, we can create static odor landscapes in which each point in the world is defined exactly by a certain odor profile.
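As a hedged sketch of that control logic – the odor names, two-odor gradient layout and 0–5 V command range here are illustrative assumptions, not the published system – virtual position can be mapped to a command voltage for each odorant’s mass flow controller:

```python
# Illustrative sketch (not the published 'smellovision' code): map virtual
# position to mass-flow-controller command voltages so that every position
# corresponds to a fixed odor profile.

TRACK_LENGTH_CM = 180.0    # assumed virtual track length
MAX_VOLTAGE = 5.0          # assumed full-scale command voltage of the controller

def odor_voltages(position_cm):
    """Two opposing gradients: odor A strongest at the start of the track,
    odor B strongest at the end."""
    x = min(max(position_cm / TRACK_LENGTH_CM, 0.0), 1.0)  # normalize to 0-1
    return {
        "odor_A": (1.0 - x) * MAX_VOLTAGE,  # fades out along the track
        "odor_B": x * MAX_VOLTAGE,          # ramps up along the track
    }

# At the midpoint both odors are delivered at half strength
print(odor_voltages(90.0))  # {'odor_A': 2.5, 'odor_B': 2.5}
```

Because the command voltages depend only on position, the landscape is static: the same place in the virtual world always smells the same.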
Dan: This hadn’t been done before – nobody had made an olfactory odorant delivery system that could operate on fast enough timescales, and for long enough, to cover a behavioral session. There was a lot of engineering that went into building the system and making this possible.
What are the key implications of your research?
Brad: We’ve made this ‘smellovision’ and we can now control a new sensory modality precisely in virtual space. Technologically, it’s something interesting and useful, but it also has many future applications for neuroscience research that would be very interesting, for example in multi-sensory integration.
There is a map of space in the brain, but the brain doesn’t have any spatial detectors. However, we can see and hear, and we use our senses to construct this concept of space. Using multi-sensory VR, we can try to piece apart how these different features of space combine to make this map of space in the brain.
Dan: One big question we were able to answer directly was whether the brain can form a map of space based only on olfactory cues. That’s something that had never been tested before because it’s very difficult to set up an experiment in the real world where olfactory cues are the only thing defining the environment.
You can try to make olfactory cues define the environment by shutting the lights off and taking away sounds, but the animal can still feel its way around the environment – there are always other sensory cues available to animals in the real world.
It wasn’t possible to test whether olfactory cues on their own could generate a map of space; you need a virtual environment in order to take away all the other sensory cues. We can shut off the lights in VR, and there are no auditory or tactile cues that can tell the animals where they are. They have to rely on their sense of smell to build this map of space, and we could then image down into the hippocampus and see that the animals were in fact building a map of this space based only on these smells.
What’s the next step for you? How will you develop this further?
Brad: Next, we will head in the direction of multi-sensory integration. In this paper, we showed that with the lights off and using just odor gradients, the mouse can form a map of space in its brain. But what we didn’t show is how the mouse navigates a multi-sensory environment made up of both visual and olfactory cues. We really want to piece apart this multi-sensory world to see what the map of space in the brain is made out of.
Dan: Another advantage of doing that in VR is that you can do what are called cue conflict experiments, in which the environment is defined by both visual cues and olfactory cues and you can suddenly shift one with respect to the other. You can flip the sides of the track – for example, if one side of the track smells like bubblegum and has certain visual cues, then all of a sudden you can flip it so that the other side of the track smells like bubblegum.
We can image into the animal’s brain at the same time and watch what happens to this map as we do these cue conflict experiments. Are there cells that follow only the olfactory cues, and cells that follow only the visual cues? Or are all of these cells building the map based on both sensory cues, so that the whole map just falls apart when we do these cue conflict experiments?
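A hypothetical sketch of the kind of manipulation Dan describes – the cue names and track layout below are made up for illustration, not the lab’s actual protocol – in which visual and olfactory cues are normally yoked to the same sides of the track and a conflict trial swaps one modality only:

```python
# Hypothetical cue-conflict manipulation (names and layout are assumptions):
# visual and olfactory cues normally co-vary; a conflict trial flips the odors
# while leaving the visual cues in place.

normal_layout = {
    "left_side":  {"visual": "striped_wall", "odor": "bubblegum"},
    "right_side": {"visual": "dotted_wall",  "odor": "pine"},
}

def flip_odor_cues(layout):
    """Swap the odors between the two sides; visual cues stay where they are."""
    flipped = {side: dict(cues) for side, cues in layout.items()}
    flipped["left_side"]["odor"] = layout["right_side"]["odor"]
    flipped["right_side"]["odor"] = layout["left_side"]["odor"]
    return flipped

conflict_layout = flip_odor_cues(normal_layout)
# The striped side now smells of pine: imaging can then ask whether each place
# cell follows the visual cues, the olfactory cues, or neither.
```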
These are very difficult experiments to do in the real world – if you want to shift one sensory cue with respect to the other, or flip one with respect to the other, it’s not really clear how you could do that outside of VR.
Brad: This is possibly the only way to control two sensory spatial variables at the same time. So, I think it’s a really powerful way to deconstruct what this map in the brain is really made out of.
Do you think that the future of neurological studies could be in VR?
Dan: VR studies have a lot of advantages as well as drawbacks, and real-world studies have a lot of advantages of their own. I think VR studies and real-world studies complement each other very much. The two main advantages of VR studies are:
- You can control the environment of the animal very precisely, and you can do manipulations in VR that you can’t do in real-world environments, like the cue conflict experiments, presenting only one sensory cue and taking all the others away, or warping animals from one location to another.
- The animal’s head is fixed in space, so you can use really powerful cutting-edge optical and electrophysiological techniques that are very difficult to implement in freely moving animals. This allows you to study the brain at a spatial resolution and speed that are very difficult to achieve in real-world experiments.
On the other hand, there are things that are very difficult to study in VR and are much better studied in real-world experiments, like social interactions. If you want to study two animals interacting with each other, VR has a long way to go to get to that point. I don’t think either VR or real-world studies are going to disappear – I think they’re going to go hand in hand.
If you had any advice for researchers hoping to follow this line of research, what would it be?
Dan: A lot of researchers are getting involved in VR, and they should think hard about the differences between VR and real-world environments – they’re not the same, and you need to design your experiments properly in VR in order to elicit the effects or behaviors that you’re looking for.
One of the big differences is that the concept of consequences is different in VR than in real-world experiments. As an example, among the real-world mazes that people often make for rodents are elevated mazes, where the animal is on a thin track many feet up in the air; if it stops paying attention, it might fall off the track and get hurt. So the animal is always paying attention to make sure it knows where it is and doesn’t fall off.
In VR experiments like ours, the animals are head-fixed and running on a ball, and there really aren’t consequences to an animal disengaging from the task or closing its eyes and running as fast as it can – it’s not going to fall off the edge of anything, because it’s safe in this head-fixed floating-ball apparatus.
In virtual environments, you really have to incentivize the animals in some way – you have to give them a reason to pay attention to the virtual simulation. We do that by giving them rewards that they really care about at particular locations on the track. If the animals aren’t motivated to get those rewards, they probably won’t care about the simulation going on around them. So the attentional aspect is something that can be very different between real-world and VR experiments, and researchers really need to pay attention to that.