Mapping Billions of Synapses with Microscopy and Mathematics

Information storage in the mammalian brain is a complex problem, and one complicated by scale. It has long been understood that only a few, widely distributed synapses encode any one memory, but for decades visualizing those synapses and creating a memory map was a nearly impossible task. That changed when the laboratories of Drs. Gary Lynch and Christine Gall at the University of California, Irvine pioneered a technique that allows them to map learning-induced functional changes in individual synapses throughout the hippocampus. Their labs use a combination of widefield imaging techniques and image segmentation analysis to visualize these changes. Dr. Christopher S. Rex, Project Scientist, discusses the lab’s findings and their broader implications for the field of learning and memory.



Studies on long-term potentiation (LTP)

Our lab studies learning and memory in the neocortex and hippocampus of rodents. We are interested in understanding how information is stored in the brain at the most fundamental levels. This has been a difficult question for the field to answer because not only is the information sparsely encoded (only a few synapses) but it is widely distributed as well (over many brain regions).

The number of synapses required to encode any one memory is extremely small. For example, if there are 10^15 synapses in your brain, there may be only tens to hundreds of dispersed synapses that encode any single piece of information. This is the worst needle-in-a-haystack situation you can imagine. To address this problem our lab studies long-term potentiation (LTP). LTP is the physiological mechanism by which cells increase and maintain their synaptic strength; it is induced within a short time period, on the order of seconds, but can persist for days, months, or years.
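A back-of-the-envelope calculation using the figures quoted above makes the sparsity concrete (both numbers are rough, illustrative values, not measured counts):

```python
# Back-of-the-envelope sparsity estimate using the figures quoted above
# (both numbers are rough, illustrative values).
total_synapses = 10**15          # approximate synapse count in the brain
encoding_synapses = 100          # upper end of "tens to hundreds" per memory

fraction = encoding_synapses / total_synapses
print(f"fraction of synapses encoding one memory: {fraction:.0e}")  # 1e-13
```

One part in ten trillion: finding the relevant synapses by brute-force search is hopeless, which is why a functional marker of potentiation is so valuable.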

Experimental approaches

Our work began with slice recordings, but measuring physiological activity alone was not enough to distinguish one synapse from another. This prompted our lab to search for the biological correlates to the observed physiological activity. We identified important second messenger signaling cascades or phosphorylation events that occur at isolated synapses when we induce LTP.

Specifically we focused on structural events such as cytoskeletal changes that occur at the processes of neurons. We identified and visualized enzymes that regulate these LTP-induced cytoskeletal-structural events. This finding is very important because it allows us to visualize the locations where information is encoded on the level of individual synapses.

The lab began its work in electrophysiology, stimulating neocortical and hippocampal slices with an electrode to induce LTP. We and others showed that the structural changes that occur with LTP appear to be permanent, but that the second messenger signals (phosphoprotein levels) are temporary. Immediately following stimulation, therefore, we fixed the tissue and labeled it with phosphoprotein-specific antibodies.

In doing this we had a functional marker of LTP-like activity at individual synapses that had occurred within the last five minutes or so. We also counterstained the slices with an antibody specific for PSD95, a protein found only at the postsynaptic density of excitatory synapses. This allowed us to identify the locations of the synapses themselves. In this study we found a strong correlation between LTP and the number of synapses with high phosphoprotein levels.

From there we bridged into the learning studies. Although we had found a very good marker for LTP with artificial stimulation we wanted to see if and where these same events occurred in the brains of animals under naturalistic learning circumstances.

To address this, our lab developed a learning paradigm called unsupervised learning. It is different from conditioned learning or associative learning in that the animals don’t have explicit cues given by the experimenters; the animals are allowed to roam a complex environment for 30 minutes. Then we rapidly freeze the brain and prepare sections.

Measuring and mapping synapses

Initially we targeted the hippocampus because we knew that it was involved in this form of learning, but we focused only on very small regions of the hippocampus because the task of mapping the information was so labor-intensive. We found that the animals that went through the learning environment had a greater number of synapses with dense phosphoprotein labeling, indicating LTP-like activity at synapses within the hippocampus.

Since then we have developed something extremely exciting in connection with our bigger question, namely: where exactly is this information being stored? We have expanded the learning study to take contiguous regions across an entire section of hippocampus. For this study, we worked closely with Leica Microsystems to develop automated microscopy so that we could scan an entire section at high resolution.

We combined this with a complex analytical system that we have developed in-house. To give you an idea of how many synapses we are examining, for the very first LTP and learning studies that we published we measured a few thousand synapses. For the subsequent papers we measured up to one million synapses. And now for one section of hippocampus we are measuring about 200 million synapses, and for a whole hippocampus we are measuring well over a billion. That’s for one animal. So for one study hundreds of billions of synapses are being measured and mapped, which is really remarkable.

Microscopy and mathematics

We had been using an upright epifluorescence microscope, acquiring dual-channel, three-micron-thick z-sections in order to reconstruct 3D immunofluorescent images. The challenge for us was to adjust the depth of our optical sections within the tissue to match the optimal plane of labeling. To do this we needed an autofocus that could focus on the micron-sized, individual spheres that are our labeled synapses. For this we found Metamorph’s algorithms to be perfectly suited. Additionally, this software provided the flexibility to move between adjacent locations and find the optimally labeled plane. We tried a number of programs, and we even tried to write our own software for this, before we found Metamorph.
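Metamorph’s autofocus algorithms are proprietary, but the general idea behind image-based autofocus can be sketched with a standard contrast metric, such as the variance of a discrete Laplacian, which peaks at the sharpest plane of a z-stack. A minimal Python illustration (the synthetic stack and the metric are our own choices, not the software’s actual implementation):

```python
import numpy as np

def focus_score(img):
    # Variance of a discrete Laplacian: higher for sharper images.
    # (A generic contrast metric; not Metamorph's actual algorithm.)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(lap.var())

def best_focal_plane(z_stack):
    # Return the index of the sharpest plane in a (z, y, x) stack.
    return int(np.argmax([focus_score(p) for p in z_stack]))

# Synthetic demo: one punctum, blurriest in plane 0, sharpest in plane 2.
y, x = np.mgrid[0:64, 0:64]
stack = np.stack([np.exp(-((y - 32)**2 + (x - 32)**2) / (2 * s**2))
                  for s in (8.0, 4.0, 1.5)])
print(best_focal_plane(stack))  # → 2
```

In practice such a score would be evaluated over a small z-sweep at each stage position, moving the objective to the plane that maximizes it.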

Now we are collecting images around the clock in order to gather enough data to generate large-scale maps. We found that using the internal filter turret slowed us down and caused too much wear and tear on the microscope, so we invested in fast external filter wheels. We have now assembled a system with very low mechanical wear that can run robustly and reliably for days on end.

Using this new system we developed a workflow in which Metamorph performs the acquisition and, as soon as the images are acquired, custom-written software sends them to our cloud analytical system, which begins the analysis immediately. The images are deconvolved and then analyzed by image segmentation, all with in-house, custom-designed software that we’ve built over the past three or four years. In short, we’ve gone from a study that originally took about six months to image and analyze to one that we can do in about a week. The increased speed makes it possible for us to map large brain regions.
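The hand-off described here, with acquisition feeding analysis as soon as each image lands, is essentially a producer/consumer pipeline. A minimal sketch in Python, where the deconvolution and segmentation functions are stand-in stubs for the lab’s in-house software:

```python
from queue import Queue
from threading import Thread

# Stand-in stubs: the real deconvolution and segmentation software
# is in-house and custom-built.
def deconvolve(image):
    return f"deconvolved({image})"

def segment(image):
    return f"segmented({image})"

def analysis_worker(q, results):
    # Consume images as they arrive and run the analysis chain.
    while True:
        image = q.get()
        if image is None:          # sentinel: acquisition finished
            break
        results.append(segment(deconvolve(image)))
        q.task_done()

q, results = Queue(), []
worker = Thread(target=analysis_worker, args=(q, results))
worker.start()
for i in range(3):                 # stands in for the acquisition loop
    q.put(f"stack_{i}")
q.put(None)
worker.join()
print(results)
```

The key property is that analysis overlaps acquisition rather than waiting for it to finish, which is where most of the six-months-to-one-week speedup in a pipeline like this comes from.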

In our image segmentation analysis we’re looking for objects that are approximately 200 nm in diameter. We perform the image segmentation separately on each channel. We then identify colocalization, better termed coplacement, by comparing the boundaries identified for each object. We have found that the number of excitatory synapses containing dense phosphoprotein labeling is on the order of 2–5%; it is a very sparse signal, as we would expect from a high-capacity memory system.
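The lab’s segmentation software is custom-built, but the coplacement idea can be illustrated with a minimal sketch: threshold each channel independently, label connected components, and count the objects in one channel whose footprints overlap an object in the other. The thresholds, array sizes, and helper names below are illustrative, and `scipy.ndimage` stands in for the in-house tools:

```python
import numpy as np
from scipy import ndimage

def segment(channel, threshold):
    # Label connected above-threshold regions (candidate puncta).
    labels, n = ndimage.label(channel > threshold)
    return labels, n

def coplaced_fraction(labels_a, n_a, labels_b):
    # Fraction of channel-A objects whose footprint overlaps any
    # channel-B object (boundary comparison reduced to pixel overlap).
    hits = 0
    for i in range(1, n_a + 1):
        if np.any(labels_b[labels_a == i] > 0):
            hits += 1
    return hits / n_a if n_a else 0.0

# Tiny synthetic two-channel field: three synaptic puncta in channel A,
# one of which overlaps a punctum in channel B.
a = np.zeros((20, 20))
b = np.zeros((20, 20))
a[2:4, 2:4] = 1; a[10:12, 10:12] = 1; a[16:18, 5:7] = 1   # "PSD95"
b[11:13, 11:13] = 1                                        # "phosphoprotein"
la, na = segment(a, 0.5)
lb, _ = segment(b, 0.5)
print(coplaced_fraction(la, na, lb))   # one of three objects is coplaced
```

At real scale the same comparison runs over hundreds of millions of labeled objects per section, which is why the per-object test has to stay cheap.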

We used confocal microscopy in our first publication, but we have since elected to use widefield microscopy coupled with deconvolution because the acquisition is much faster. We know that the resolution we are able to obtain is not as good as what we could get with a laser scanning system, but we are willing to accept that trade-off in order to have the speed and efficiency that our current system permits.

We have also found that the elements we are looking at, dimly labeled elements that photobleach quickly, are not identifiable with confocal microscopy. Deconvolution, on the other hand, preserves the light, making it easier to identify the synapses.

Outlook and conclusion

There are a number of things on the horizon for this project. We will be introducing multiple learning paradigms and expanding the number of brain regions we analyze. We are also always trying to increase the efficiency of the system.

For a long time there’s been the belief, supported by evidence from lesion studies, that different brain regions are responsible for different forms of learning. But we did not know whether the information associated with that learning is stored in those brain regions, or whether those regions perform a function that is necessary either for the system or for that particular form of learning. Our lab will continue to strive to develop an empirically based understanding of how and where information is processed in the brain in order to address these questions.
