
Microscopy Enhanced Navigation in Neurosurgery

Presentation from Prof. Philippe Bijlenga

Neurosurgical procedures are complex and delicate, especially when they involve a craniotomy. Surgeons need to remain extremely focused throughout. Navigation plays an essential role, helping them execute surgical plans precisely.

Prof. Philippe Bijlenga is a neurosurgeon at the Geneva University Hospitals in Switzerland. He specializes in neurosurgical pathologies, in particular brain vessel lesions and skull lesions. At the European Association of Neurosurgical Societies (EANS) 2018 Congress, he shared his clinical experience using microscopy enhanced navigation, discussing the benefits of this approach.

Picture: Presentation from Prof. Philippe Bijlenga at the European Association of Neurosurgical Societies (EANS) 2018 Congress on the Leica Microsystems booth.




Key Learnings

Prof. Bijlenga presented several cases, including an Arteriovenous Malformation (AVM) and brain tumor removal, showing how navigation with an Augmented Reality neurosurgical microscope helps surgeons better understand anatomical structures, and how this assistance opens up new possibilities for neurosurgeons.

He uses the Leica M530 OHX surgical microscope, a precision microscope for neurosurgery and other complex procedures. Equipped with the groundbreaking FusionOptics technology, it unites an enhanced depth of field with high resolution to create an optimal view of the surgical field. The microscope integrates both the Brainlab Microscope Navigation software and the GLOW800 Augmented Reality fluorescence for Fluorescence Guided Surgery (FGS).

Fluorescence in neurosurgery offers critical information. In the case of an AVM removal for example, it can help preserve small en-passant vessels, which can be difficult to visualize and preserve otherwise according to Prof. Bijlenga.

Discover the video of his presentation below, as well as a full transcript. For more information on neurosurgery operating microscopes, contact a Leica representative. Our team will be happy to advise you on different options, including the Leica M530 OHX microscope as well as the ARveo digital Augmented Reality neurosurgery operating microscope. The ARveo microscope from Leica Microsystems provides a single, precise and augmented view of the surgical field in real time. Both the M530 OHX and the ARveo are compatible with Brainlab navigation.

Transcription of the Presentation

“I’m Professor Bijlenga, I work in Geneva University Hospitals as a neurosurgeon. I spent the last 5-6 years working on improving navigation systems using microscopes. Today I will present microscope-enhanced navigation. What I will present is an overview of a method we use.

Augmented Reality Skin Surface Registration 

[01:12 - 01:45]

The concept is actually to use the microscope to register. And you can use the microscope to register using the face of the patient, to have a kind of first alignment. Here, you can use the eyebrow, the nose, the ear, which are accessible landmarks. So, you can actually see if your navigation is shifted or if it’s rotated. And there is a way now to digitally correct that, manually, interacting with the navigation, to re-register.

Augmented Reality Bone Registration Landmarks

[01:46 - 03:00]

Then, what you can do is when you open the skin, you can expose bone, and on the bone, there are lots of landmarks. There are sutures, there are diploic veins, going through holes, there are special bumps and hills and valleys that you can follow. In blue here are the usual landmarks we use to actually make a very precise registration of the navigation, meaning that on the imaging we recognize those features and, on the patients, we recognize the same features, and realign the navigation according to those features.

Now, this is for the outside, but when you open the skull, you still have bone inside. And here again there are classical features you can use to re-register. Classically, you will have here the sphenoid ridge, which you may be drilling and can use to readjust. You will have here most probably the meningeal artery groove, which you can find. And in the posterior fossa, you can have the internal auditory canal, for example.
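The landmark-matching idea described above — recognizing the same features on the imaging and on the patient, then realigning the navigation to them — amounts to fitting a rigid transform between two matched point sets. The following toy 2D sketch (pure Python, purely illustrative; clinical navigation systems solve this in 3D with more robust estimators, and the function name is made up) recovers the rotation and translation that map imaging landmarks onto patient landmarks:

```python
import math

def register_landmarks(image_pts, patient_pts):
    """Fit a rigid 2D transform (rotation + translation) mapping
    imaging landmarks onto the matched patient landmarks."""
    n = len(image_pts)
    # Centroids of both point sets
    cix = sum(p[0] for p in image_pts) / n
    ciy = sum(p[1] for p in image_pts) / n
    cpx = sum(p[0] for p in patient_pts) / n
    cpy = sum(p[1] for p in patient_pts) / n
    # Optimal rotation angle from cross/dot sums of centered correspondences
    s_cross = s_dot = 0.0
    for (ax, ay), (bx, by) in zip(image_pts, patient_pts):
        ax, ay = ax - cix, ay - ciy
        bx, by = bx - cpx, by - cpy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated imaging centroid onto the patient centroid
    tx = cpx - (c * cix - s * ciy)
    ty = cpy - (s * cix + c * ciy)
    return theta, (tx, ty)
```

Feeding in the coordinates of the matched features from both spaces yields the rotation and shift to apply, which is the digital correction the speaker describes making manually today.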

Augmented Reality Signature Vessel Registration

[03:01 - 03:58]

Then, once you have access to the brain itself, the brain is full of vessels. And here is actually the concept that you can recognize vessels, and each vessel has bifurcations, which have a very typical and unique shape. You can take advantage of that shape to actually register using signature structures. So here is the concept: on the right side you have the digital image reconstruction of the vessel, and you can see that there is here a Y shape which is the same. This is the signature structure you can register. The idea is to do an edge detection and, in the future, to get a computer doing that for you, so that the computer recognizes those different shapes all around your surgical path, can link to them, and readjusts the navigation according to those landmarks.
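The edge-detection step mentioned here can be illustrated with a minimal gradient-magnitude filter. This is a toy stand-in (central differences on a 2D list of gray values), not the navigation software's actual algorithm; strong responses trace the vessel outlines from which a bifurcation's Y-shaped signature could then be matched:

```python
import math

def edge_magnitude(img):
    """Central-difference gradient magnitude; img is a 2D list of floats.
    Border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0  # horizontal gradient
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0  # vertical gradient
            out[y][x] = math.hypot(gx, gy)
    return out
```

A flat region produces zero response everywhere, while a vessel boundary (an intensity step) lights up along its edge — exactly the kind of map in which a distinctive bifurcation shape could be searched for.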

Augmented Reality Signature Structures: Cortex – Tumor

[03:59 - 04:43]

Now, when you open the brain, you’re facing the cortex and the white matter and maybe the tumor. And you can see that there are differences in gray shades, and actually those differences in gray shades are very similar to what you can see on imaging, on T2 images or on different types of images. You can see that there is a shape, and this shape is very similar to what you see in the operating field. And again, you can use those characteristic shapes to actually re-register at the millimetric level. Here there’s the cortex and here there’s some tumor. And you can see that you can map those two things.

Augmented Reality Signature Structures: Ventricles

[04:44 - 05:07]

If you go deeper, you end up with the ventricles. The ventricles are easy cavities to segment, so the blue image here is quite easy to obtain and when you open here the ventricle, you can see if your virtual ventricle is actually well adjusted with the real ventricle.

Augmented Reality Signature Structures: Fluorescence

[05:08 - 06:04]

This is an example of an AVM (Arteriovenous Malformation) where you can see the vessels of the AVM using GLOW800. You can have MIP images of the imaging that was acquired prior to the operation, and you can see that the green and the white here overlap quite well. The idea here again is to have a computer adjusting that, so that you have automatic tracking continuously during the operation.

Inside, when you operate on tumors, you can use 5-ALA (5-aminolevulinic acid) and 5-ALA is going to be fluorescent, and the edges of the fluorescence actually correspond to the contrast enhancement you have on imaging. So, if you segment the tumor prior to the surgery, you can actually overlay the image of the tumor with the fluorescence. And, again, there are edges, there are shapes you can use to re-register.
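One simple way to quantify how well the 5-ALA fluorescence region lines up with the pre-operative tumor segmentation, as described above, is an overlap score such as the Dice coefficient. This is an illustrative sketch on binary 2D masks, not part of any of the systems mentioned in the talk:

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks (2D lists of 0/1).
    1.0 means perfect overlap, 0.0 means no overlap."""
    inter = total = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a and b   # counts pixels set in both masks
            total += a + b     # counts pixels set in either mask, twice if both
    return 2.0 * inter / total if total else 1.0
```

A score near 1.0 between the fluorescence mask and the projected segmentation would indicate the overlay is well adjusted; a falling score could flag the need to re-register.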

Structure – Function Association

[06:05 - 06:44]

That’s the example I just showed here. What we did in that case is that, before we removed more, we tested for the blue thing. The blue thing is actually the visual tract, and we are able to stimulate the visual tract and see if we have a visual response. And here we can actually monitor how close we are to this visual tract. So we have a functional mapping we can link to the shape mapping. We were very carefully removing the last small areas of remaining fluorescence, and we can see that, at the end, there is no fluorescence anymore and we preserved the optic tracts.

Avatar: Digital Twin

[06:45 - 08:32]

So now if you’re able to continuously register at the microscopic level, you can link many different things together. This is kind of a very fancy, funny and interesting concept: it’s the digital avatar. There is already a population of about 400 avatars existing, that are segmented individuals. Those avatars correspond to each of us, so we have our closest avatar. We can actually start with our baby avatar and grow all our life with our avatar. And every time we do imaging, we can capture our own data and get those data recorded in the avatar, which allows us to combine the real acquired data with data interpolated from the model for the rest of your body.

Typically, when you want to look at vessels in the skull and you want to do flow simulation, you need to know how the flow is going to be out of the heart and how it’s going to be in the head. And the flow conditions are fixed by your carotid arteries. So, if you don’t know the carotid arteries because they are out of the image, then you can extrapolate them from the model, from the avatar.

Now, once you are aligned with the imaging, you can register your microscope, you can register the picture you take with the microscope onto the imaging. You can record your operating field position, and everything you see in your operating field can be tracked back to the original reference image. And, whatever you do here, you can connect it to a position, so you reconnect all the data you acquire from a patient on a pixel basis back to a reference object. That opens a lot of opportunities.

Some Examples

[08:33 - 18:25]

I’m now going to go through some examples. So that’s typically what we see, we have this blue segmentation of the face from the imaging, and we see how this blue segmentation is overlaying with the face. We can do it dynamically, up and down.

This is a posterior fossa, and this is to illustrate how you see it. When you see it through the microscope, the experience is different from what you see here, because it’s designed not to disturb the surgeon. It’s very faint, but when you look at it, you see that it’s very comfortable. So here there is the transverse sinus, there is the vertebral artery, and there is the green point, which is the place where we are going to open the arachnoid.


Here we adjust to the bone; you see that there is a hole here and there is this mastoid suture here. We defined and segmented them prior to the surgery, but here we realign them, so we’re sure that the sinus is going to be where it should be. Now be careful with that image, and this is quite important to understand: there is parallax. The sinus is actually below the skull, so it’s deeper.

So, if you change your trajectory, the projection on the surface of the bone is going to be changing. We draw here the sinus, and you see that this is shifted because we changed the trajectory of the microscope between the drawing and the picture. It’s very important to have a recorded trajectory where you align all your projections. As soon as you change from that, all your drawing is changing.
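The parallax effect described here reduces to simple geometry: a structure lying at depth d below the surface projects onto the surface at a point that shifts laterally by roughly d·tan(θ) when the viewing trajectory tilts by θ away from the recorded one. A one-line sketch (the function name and the numbers in the usage note are made up for illustration):

```python
import math

def surface_projection_shift(depth, tilt_rad):
    """Lateral shift (same units as depth) of the surface projection of a
    point lying `depth` below the surface, when the viewing trajectory
    tilts by `tilt_rad` away from perpendicular."""
    return depth * math.tan(tilt_rad)
```

For example, a sinus 10 mm below the bone surface viewed just 5 degrees off the recorded trajectory projects roughly 0.9 mm away from where it was drawn — enough to matter at the millimetric precision discussed in this talk, which is why the projections must be viewed along the trajectory in which they were aligned.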


This is just prior to opening the dura. You see that we were able to really open very carefully and very close to the sinus. We were able to expose a little bit of the sinus to show that it’s really accurate. Now here, when you move, you can see for yourself that you have brain shift. Everything is shifting, so I’m going to help you a little bit. There is here a faint image of the vertebral artery and there is here the vertebral artery. You can see that, and you can readjust again towards the vertebral artery.

Here you see this is the 11th nerve going in the jugular foramen and this is the jugular foramen. You see that this should be here, so you can shift it back, put it in the correct position and then you lock again in a very precise location.


That’s a little bit later, so now here, if you didn’t follow the surgery, it’s difficult to know what’s going on. It’s difficult to know the anatomy here. Now here it’s much easier to see the schwannoma, to see the auditory canal and to see the vestibular system. We use that to drill the bone on the top of the auditory canal, and you see it’s quite close to the vestibular system. So, it’s quite nice to see it, but you see that it’s shifted. And this is because I don’t want to overlay, at that time, the tumor with what I have to drill. So, I shifted it away to actually know the distance between the edge of the bone and the vestibular system. And I know that this distance can be drilled without problems. So that’s what we do here.


This is another example where you see that here we didn’t have time to prepare. So, we used the MIP images to scan and see how well it is adjusted. We don’t have a segmented face of the patient. We used the real raw data and we just scanned through. And here we are able to see the tumor, and you see that the tumor is on the edge of the ventricle. This case allows me to show you all the steps of the surgery.

You see that there is a vessel, which we use as a signature vessel. And then you see in the depths here that there’s another vessel, and we use that other vessel again to re-register precisely. We know where the tumor is according to that vessel. The tumor is somewhere here and it’s a huge tumor. Here we see it again, we mark a little bit the cortex just to remember where we think the tumor is most likely on the surface.

And then we’re looking at the front edge, to make the contour of the whole tumor. So, we’re going on one side and on the other side of the gyrus and we define where the best place to cut is. And then we’ll cut the pia and start removing the tumor which is below. So here you have different facilities. You can scan, and either you can see the whole shape in 3D volume rendering, or you can have the XYZ projections which allow us to understand the anatomy as we are used to. You can project that on the upper part of your field of view, and here again, I quite like to use this 3D rendering, where the line here shows what is in the plane of focus, and all the rest of the image is what is below focus.

And you can see that I feel and I see that there is a difference of color, and this is much easier with the overlay; you follow the overlay and you can actually very nicely dissect the tumor. You have a very good understanding of where it should be, and when you know where it should be, it’s like finding a needle. If you know where to look, you find the needle. If you don’t know where to look, you don’t find the needle. And here what you do is you go to 5-ALA, and what you see is that there is the 5-ALA and that the 5-ALA is adjusted with the contour of the segmentation. So again here, I was quite surprised. I didn’t think that 5-ALA was as enhancing as the contrast enhancement on imaging, but actually, here, it adjusts very well. So, we can even use the fluorescence signal to readjust the navigation. As long as the shape is perfectly the same, we can assume that we’re looking at the same structure.

And here you see that again we can use this MIP image, which is the raw data, and here you see the edge of the ventricle, and you see that the edge of the ventricle is perfectly adjusted with the tumor. So, it’s not always perfectly adjusted, but you can readjust whenever you need to or whenever you want to. I think in the future a computer could be assisting, tracking all the time and keeping things correctly adjusted.


So here it’s another example where you see the fluorescence, and here we are just on the bottom of the tumor. And again, the bottom of the tumor, you know how deep you have to go, because you see where your tumor is and you can swap to the 5-ALA and again you can see the contour of the 5-ALA and the contour of the segmentation. You can see how well those are adjusted.

At the end, you can do imaging and you can compare the pre-op and the post-op imaging that are overlaid here. The pre-op is in red and the post-op is in gray. You can see that we were able to remove the tumor in very good conformance with the pre-operative image, which I think I would not have been able to do without this assistance.


This is a case of an AVM: in red, there is the AVM nidus, in yellow there is an en-passant vessel which means that it is a vessel that goes around the AVM. There are some branches feeding the AVM but this also feeds normal tissue. Then you have here the sylvian artery, where here there is a feeder to the AVM and this is a proximal superficial branch. You can have an understanding of the AVM before you operate, before you open the skin.

This is the AVM and you see the GLOW going through. You see that the GLOW is popping up, and you can identify here the veins we saw before; we identified the artery, the en-passant artery here, and the green spots are the places where we decided we would put preventive clips. So here we put the preventive clips on both sides, and we see that the AVM is still getting green, because we know there are some feeders from the depths, and there is actually one feeder in the corner here which ends up in an aneurysm. And here what we do is we clip the aneurysm and we keep the feeder alive, so we can check it again and we can check that our theory that those vessels are interconnected tubes is true.

So, we put a clip on one side and we look at the other side. If we inject the ICG and we see the fluorescence coming, it means that we didn’t clip the right vessel. If it’s disappearing, it means that we clipped the right vessel. What you see here, at the end of the surgery, is the en-passant vessel that is still preserved. You see that the vein and the AVM are not receiving any contrast, so having the Augmented Reality and GLOW allowed us to remove the AVM while preserving small en-passant vessels that would have been difficult to visualize and preserve without it.

That’s I think the main message I wanted to give and I hope you will use such tools; they are available on many different microscopes and on many different platforms. Thank you very much.”

Interested to know more?

Talk to our experts. We are happy to answer all your questions and concerns.

Contact Us

Do you prefer personal consulting?