In this segment, we'll talk about the second major component of the clinical IT infrastructure, and that is the medical imaging setup. This is my own personal area of research, so you may get a little more detail here to reflect that. Here is a history of the major medical imaging modalities. Medical imaging starts in 1895 with the discovery of X-rays by Wilhelm Roentgen. You can see a picture of how radiologists looked at X-rays: originally the image would be developed on film, and the film would be placed on these things called light boxes. This is a light box; you put the X-ray film on top, and radiologists would stare at it and see what they could identify. In about the 1950s we have the development of nuclear medicine technology, that's SPECT and PET. Here we inject a trace amount of radiation into the body and image its distribution, and that can be diagnostic for diseases. For example, we use radioactive glucose to measure metabolic activity in cancer: tumors often have high metabolic activity, so the radioactive tracer accumulates there and shows up as hotspots in the image. I'm sure you're all familiar with ultrasound. It comes along in the 1970s and uses low-energy sound waves; it's cheap, portable, and can be used at the bedside. In the 1970s we also have the development of X-ray computed tomography, X-ray CT. These are the first true 3D images. In X-ray CT, we acquire multiple X-ray projections all the way around the patient and combine those computationally to create a full three-dimensional image. This brought computers into medical imaging for the first time. Then in the 1980s you have the development of magnetic resonance imaging, with huge advances in soft-tissue contrast, metabolic imaging, and all the other things that MRI can do. In the clinical space, the key words you'll hear are PACS and DICOM, and these are about the databases.
PACS, the Picture Archiving and Communication System, is the main imaging database of a hospital. In a large hospital this is petabytes of data, easily the biggest database a hospital has. PACS has revolutionized clinical radiology: it permits remote viewing and reading, so instead of staring at a light box, people now do it on their computers. We'll see a picture in a minute, and we'll have a demo from Dr. Robert in the next segment. DICOM, now, is the communication protocol used to interface with a PACS. DICOM stands for Digital Imaging and Communications in Medicine, and it's the network standard you need to use to access a PACS. This is a snapshot from a radiology viewer: it's digital image analysis on the computer, so you can zoom in, you can change the contrast. For those of you interested in learning more about DICOM, there is an excellent book by Pianykh of the same name, Digital Imaging and Communications in Medicine (DICOM), where you can learn all about this protocol and how it works. Now, in a hospital, when we come to the images, these are the components of an imaging study. First there is the patient: you as a patient are identified by a patient ID, and this is the number that tags your studies in the database. The next level down is a study. A study is an imaging session consisting of potentially many scans, which we'll call series, and it is tagged by a study unique ID, or UID. Each time you lie down on a scanner, it's a new study. If you come in on one day and you get a CT and an MRI, those are potentially two studies. Each study consists of a number of series. This is a series of images: images of the head starting here at the mouth and working all the way up to the top of the head. And each series is made up of, at the bottom of the hierarchy, instances.
An instance is a single slice: this is one instance, this is another instance, this is another instance. Instances make up series, series make up studies, and of course all the studies are tagged to a patient. The reason we have multiple series is that in one imaging session we often want to measure different things. In MRI, for example, we may acquire pre-contrast images, then inject contrast and acquire a different set of images after contrast. Those are two different series in a single study. A complex medical imaging exam may consist of a number of series in different body parts, in different orientations, or with different levels of contrast, because we're interested in extracting different information. For example, people who go in for prostate cancer evaluation typically get three sets of images: an anatomical image, a diffusion set of images, and a post-contrast image. Those three together can be used to evaluate a patient's likelihood of having prostate cancer. Now, image data. When we think about images, at least in the crudest form, images are just collections of numbers. These are called pixels if they are in 2D, or voxels if they are in 3D. Each of these little squares is a pixel, a picture element. The intensity data, these colors that we see, are in fact representations of numbers. If we zoom in on a piece of the image, this dark patch could be the number 0. Typically, images are shown dark to light: small numbers are dark, large numbers are bright. That could be another 0, this could be 100, this could be 200. As far as a computer is concerned, these images are simply collections of numbers. If they are in 3D, we have matrices indexed by i, j, k, where i, j, k are the axes of the image: i = 0, 1, 2, 3 along one direction, j along the second, and k along the third dimension. Often, though, they are stored in 1D arrays, one slice at a time and then one row at a time; this is called a raster scan.
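The Patient, Study, Series, Instance hierarchy just described can be sketched in plain Python. This is a minimal illustration, not a DICOM implementation: each instance is represented as a dictionary of identifiers using the standard DICOM attribute names (PatientID, StudyInstanceUID, SeriesInstanceUID, SOPInstanceUID); in practice these would be read from DICOM headers with a library such as pydicom, and the UID values here are made up for the example.

```python
# Sketch: grouping a flat list of DICOM instances into the
# Patient -> Study -> Series -> Instance hierarchy described above.
from collections import defaultdict

def build_hierarchy(instances):
    """Nest instance records as patient -> study -> series -> [instances]."""
    tree = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
    for inst in instances:
        tree[inst["PatientID"]][inst["StudyInstanceUID"]] \
            [inst["SeriesInstanceUID"]].append(inst["SOPInstanceUID"])
    return tree

# Hypothetical example: one patient, one MRI study with two series
# (pre-contrast and post-contrast), two slices (instances) each.
instances = [
    {"PatientID": "P001", "StudyInstanceUID": "S1", "SeriesInstanceUID": "pre",  "SOPInstanceUID": "i1"},
    {"PatientID": "P001", "StudyInstanceUID": "S1", "SeriesInstanceUID": "pre",  "SOPInstanceUID": "i2"},
    {"PatientID": "P001", "StudyInstanceUID": "S1", "SeriesInstanceUID": "post", "SOPInstanceUID": "i3"},
    {"PatientID": "P001", "StudyInstanceUID": "S1", "SeriesInstanceUID": "post", "SOPInstanceUID": "i4"},
]
tree = build_hierarchy(instances)
```

Note how one study (one session on the scanner) holds both the pre-contrast and post-contrast series, exactly as in the prostate example above.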
The term comes from old television: we store the data effectively the way old television sets used to scan through the screen to show you an image. There is a mapping function: the three-dimensional index (i, j, k) is mapped into the 1D data array using an equation of this type, and this is the typical raster scan. Up to this point, these images are no different from images you get from a camera or any other kind of image. But medical images are not just arrays of intensities. They present extra complications. There is significant metadata that defines things like the resolution and the orientation, and there may be intensity mapping functions implied in it; I'll explain that in a second. It's important to understand this before blindly trying, for example, to apply machine learning techniques derived from computer vision. This is a significant point. There is a lot of interest in taking techniques developed for computer vision, particularly object recognition, and applying them to the medical world. Now, there are a number of problems with that. One is that those techniques typically involve 2D images, while medical images are often 3D. Another is that the numbers in the images may mean different things at different times. The number 100 may need to be shifted and multiplied by certain constants to give you a true physical value, for example a radiation count in nuclear imaging, and those constants may differ from image to image. If you just look at the intensity values, that is not the whole information; you have to look into the header. The orientation and the resolution are also important, because some images may be at one-millimeter resolution and some may be at two millimeters; you cannot mix and match them blindly either. The metadata in medical imaging is critical, unlike, for example, camera images, which for the most part are straightforward. So how do we specify the orientation of an image?
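The two ideas just mentioned, the raster-scan index mapping and the header-driven intensity rescaling, can each be written in one line. This is a sketch under common conventions: the raster order here is slice-major then row-major (one slice, then one row at a time, as described above), and the slope/intercept names follow the standard DICOM attributes RescaleSlope and RescaleIntercept; the specific numbers in the example are illustrative.

```python
# Sketch: raster-scan mapping from a 3D voxel index (i, j, k) to a
# position in a flat 1D array, one slice at a time and one row at a time.
def raster_index(i, j, k, nx, ny):
    return i + nx * (j + ny * k)

# DICOM-style intensity rescaling: the stored pixel value is mapped to a
# physically meaningful value using slope and intercept from the header
# (the RescaleSlope / RescaleIntercept attributes). These constants can
# differ from image to image, which is why the header matters.
def rescale(stored_value, slope, intercept):
    return slope * stored_value + intercept

# In a 4 x 3 x 2 image, voxel (1, 2, 1) lands at 1 + 4*(2 + 3*1) = 21.
idx = raster_index(1, 2, 1, nx=4, ny=3)

# Illustrative CT case: stored value 1100 with slope 1 and intercept -1024
# rescales to 76 (Hounsfield units).
hu = rescale(1100, slope=1.0, intercept=-1024.0)
```

The point of the second function is exactly the warning in the text: the raw value 1100 by itself is not the whole information; without the header constants, two images with the same stored numbers can mean different physical quantities.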
Typically, we have three axes: one goes bottom to top, one goes front to back, and one goes left to right, and there is formal terminology you'll hear for these. You'll hear words like superior, which means the top, and inferior, which means the bottom; posterior, that's the back, and anterior, that's the front; and of course left and right as before. Now let's see how we specify the orientation of an image. We use i, j, and k to indicate the voxel index, so this is i = 0, 1, 2, 3, 4; we just count the squares. We use x, y, z to indicate position in millimeters. For example, if the pixels are two millimeters apart, this may be (0, 0, 0) and this may be (2, 0, 0) in x, y, z, because it's two millimeters along x, whereas it's (1, 0, 0) in i, j, k. To map i, j, k to x, y, z, we need the position of the first voxel, the circle that just appeared, and then the spacing between voxels along each of the axes of the image. The spacing is usually called the image resolution, though there are some special cases where it's not. The way we specify image orientation is often with three letters, for example LPS. Each letter names the direction the axis points to: LPS means the i-axis goes from right to left, the in-plane j-axis goes from anterior to posterior, and the slice axis goes from inferior to superior. LPS is the typical orientation in which CT images are acquired in the clinic. If you're in research and you do fMRI, you'll see a lot of images in RAS, where the axes go to the right, to the anterior, and to the superior, which is rotated 180 degrees from the clinical standard. Understanding this orientation metadata and what it all means is important if you're going to analyze images. In the next segment, we're going to take a break from the technical aspects of this work. Dr. Rabanne will come in and give us a presentation of both the electronic health records database and the PACS, and show us how they analyze images and what they do in clinical radiology practice. Thank you.
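As a closing aside, the index-to-millimeter mapping described in this segment can be sketched in a few lines. This is a simplified illustration assuming the image axes are aligned with the x, y, z axes; a full implementation would also apply the direction cosines from the header to handle oblique acquisitions. The LPS-to-RAS flip shown at the end follows from the two conventions differing by a 180-degree rotation, which negates the first two coordinates.

```python
# Sketch: mapping a voxel index (i, j, k) to a physical position (x, y, z)
# in millimeters, given the position of the first voxel (the origin) and
# the voxel spacing along each axis. Oblique orientations are ignored.
def index_to_mm(index, origin, spacing):
    return tuple(o + s * i for i, o, s in zip(index, origin, spacing))

# With 2 mm spacing, voxel (1, 0, 0) sits at (2, 0, 0) mm, matching the
# example in the text.
pos = index_to_mm((1, 0, 0), origin=(0.0, 0.0, 0.0), spacing=(2.0, 2.0, 2.0))

# Converting between the LPS (clinical) and RAS (research) conventions
# negates the first two axes; the superior axis is shared.
def lps_to_ras(x, y, z):
    return (-x, -y, z)
```

Mixing up these conventions silently left-right flips an image, which is one reason the orientation metadata matters so much in practice.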