Category: Medical Technology
Tags: Guidance, Imaging, Neurosurgery, Software, Technology
Entities: Brainlab, CT, Dr. Venkataraman, Eigen, Image-Guided Surgery, MRI, Neurosurgery, Prostate Biopsy, Yale
00:00
In this segment, we'll introduce our example project of image-guided neurosurgery, and we will use this project to anchor our discussion of user needs, system requirements, and software
00:15
design over the next two to three weeks. So this is an anchoring project to make things a little more concrete: we're going to talk about user needs and how to figure out your user needs,
but sometimes having a concrete project makes things clearer. Part of the reason for selecting this project is that this is
00:30
an area I've had some experience in. This is a picture of me from about 15 years ago in the operating room, and this is some of our early work,
from around 2009, designing software for the operating room. In this particular piece of work, we designed a network interface that allowed
00:47
a research computer to talk to the system that was actually running the surgery, over a network protocol, to get data out in real time. That way we could do research visualizations and other measurements without actually disrupting the process of surgery.
So the surgeon used the system on the left. This was
01:02
the clinical system, and we used the system on the right to do additional measurements for things like brain deformation, visualization, and all kinds of other projects. So here is the general procedure for image-guided neurosurgery, and the device you want to have in mind as we think about this is a
01:17
GPS navigation system for a car. This is a GPS system to guide brain surgery.
So it has three phases. We first acquire pre-procedure images.
The patient gets imaged with MRI or CT or what have you. Then these images are aligned and analyzed, and we create a plan.
So this
01:35
particular plan is one I used for teaching for many years. It's a map from Connecticut to Indiana.
That is where my in-laws used to live, and this was our Thanksgiving drive for many years. So this is our plan, and this process is the planning phase.
So if you think about it in terms of the GPS system, the pre-procedure
01:51
images are the maps that you've acquired from the company you got your GPS system from, whether it's Google Maps or Apple Maps or what have you; the analysis and the aligning have happened already. Asking for directions is where you literally get a set of steps.
This is your planning, and
02:07
then the next step is that we have to register the image to the patient. This is the equivalent of getting a GPS signal in your GPS navigation, connecting to a satellite.
So what does this involve? It involves mapping between the physical world, where a patient is lying on a bed here, and the image world. So if we touch
02:23
the nose of the patient, we need some function that takes us to the nose of the patient in the MRI. And if you touch the top of the head, it takes you to the top of the head.
This is the registration process: we create a function that lets us map from the physical world to the image world, the virtual world. This is the initialization phase.
02:39
Then there's a guidance phase.
This is where we do real-time tool tracking, in the same way that we know where our car is when we drive. We do real-time imaging if there is any reason to image during the procedure.
We account for patient motion and deformation. I'll show you an example at
02:54
the end, from the prostate case. And we provide feedback to the surgeon: Where are you right now?
Where are you touching? So this is the guidance.
So this is the triptych of planning, initialization, and guidance that constitutes this process.
03:09
Now, if you go into an operating room (you've seen this picture in week one, but we'll come back to it in a little more detail), you'll see, here at Yale at least, a system that looks like this.
This is the image-guided navigation system. It consists of software that runs on a big touchscreen here.
It also has infrared tracking cameras that track where our tools are in the
03:25
physical world. Down below is a close-up of a surgeon and a patient.
So this is the exposed brain of a patient; this is the brain down here.
This is what a brain looks like. And our surgeon at this point is holding a surgical pointer.
This is what they use to do the initial navigation. And at the top
03:41
of the surgical pointer there are sensors. These are infrared sensors.
The cameras track the sensors in stereo: there are two cameras, and they can triangulate to get the sensors' position in three-dimensional space.
Based on that, and the registration we explained before, we can take the position
03:58
of the tool, the tip of the tool, and put it on the image, so the surgeon knows where they are, where they're navigating. This is how the process is initialized.
If you look at a close-up of a screen (this is from the Brainlab system that we used at Yale for many years, and I think they still do), this is a zoomed-in picture, and you
04:15
can see what the surgeon will see on the screen. It's a few years old.
So these are CT images down at the bottom, and this is an MRI image at the top.
You can see the surgical tool crosshairs, and you can see landmarks that have been predefined.
So the surgeon can orient themselves as to what they're cutting and, effectively, what the procedure involves. Now, just to
04:31
give you a second example of this type of procedure, here is one from image-guided prostate biopsy, another area I've worked in a little bit over the years. Here they were using the Eigen Artemis system. We heard from Dr. Venkataraman, the director of R&D at Eigen, in the previous lecture, and we'll hear
04:47
from him again in the following week. So image-guided prostate biopsy, also called fusion biopsy: we acquire MRI images ahead of time, and we use them to define where the prostate is and also where the lesions are in the prostate that we want to biopsy. Biopsy means we're going to put in a needle and acquire a small
05:04
piece of tissue for analysis, to see if there's cancer there. So before the procedure, we acquire the images, we identify the lesions, and we do a semi-automated segmentation of the prostate.
So this orange line here: somebody actually does a semi-automated job. They use an algorithm to help them, but
05:19
you can imagine drawing lines around the prostate. So that's what happens before you go in.
Then you go into the room, and an ultrasound probe is inserted through the rectum. I apologize to those of you who find this squeamish.
And what this results in is an ultrasound image of the
05:35
prostate. And again, the urologist, now in the procedure room, will go ahead and do a semi-automated segmentation of the prostate.
On purpose, we show that the shapes are very different, because the prostate deforms as you push on it with the ultrasound probe. It squashes.
It grows. It shrinks depending on how you
05:50
apply force. We use these surfaces to do a surface-based non-rigid registration, which then allows us to map the MRI to the ultrasound. We take the lesion that is only visible in the MRI, which we defined ahead of time, and we map it into the ultrasound image.
06:06
Now we can use it to provide guidance for the urologist, so they can put a needle right down the middle of this thing to get a small piece of tissue to help diagnose cancer.
So this is a second example of this process, and I like it because it also illustrates the processes of
06:21
deformation compensation and motion, and real-time imaging as well. So keep these pictures in mind.
We'll use them, especially in the neurosurgery case, to identify all the steps in the process, and we'll begin with that in the next segment. Thank you.