The last topic of this week is volumetric textures. The work we introduce here is volumetric illustration: designing 3D models with internal textures. Here is the motivation. Carefully designed three-dimensional illustrations can show complicated internal structures, but these kinds of illustrations are very difficult to create, and it is especially difficult to make them interactive, so that they can be drawn as you cut an object. So that is our goal: to design “cuttable” 3D models. You have 3D geometry, and whenever you cut it, you get a beautiful cross section. The problem is that this is very difficult to do. Traditional, standard 3D models are just surface models with nothing inside; if you cut one, it is empty. If you want internal structure, you need volumetric information. However, volumetric information is very difficult to generate: you essentially have to paint individual voxels in x, y, z space one by one, specifying colors, which is far too tedious. You can also use CT or MRI scanners, but they just give you only intensity values, with no beautiful colors assigned. That is the problem we address, and the basic idea is to use a texture synthesis technique. When the user cuts a model, there is currently nothing inside. But in the modeling phase, the designer specifies which texture elements to use to generate the cross section. Using this information as a hint, the system synthesizes a realistic texture in real time. From the user's point of view, it looks as if a texture already exists inside. That's the idea. Let me show you a video. Here is the system overview: you provide an input image, and as soon as the user cuts the 3D model, which is initially empty, the system synthesizes the cross section automatically using texture synthesis. Let me first show you the user interface for browsing.
Here is an input 3D shape. The user cuts the shape, the system instantly synthesizes the cross section, and you get this result. If you cut in a different way, you get a different texture. So from the user's point of view, it looks like the model has dense three-dimensional texture, but from the system's point of view, it just paints the cross sections. Here is a cucumber: when you cut it, you get a cross section everywhere. That was the browsing interface; now we show the modeling interface. Modeling starts with a given 3D surface model, nothing inside, just the surface, and we teach the system how to synthesize the internal texture. The user first cuts an example cross section, which currently has nothing inside, and then specifies which texture type to use: isotropic, layered, or oriented. If the user chooses a layered texture, the system asks the user to specify example textures. A prompt says "drop image here," and the user drags and drops, say, a meat photograph, and then teaches the system which part of the image to use: this is the outside, this is the inside, and you get this. The user also specifies inside and outside on the reference 3D volume, teaching the system which part of the 3D model is inside or outside. With these correspondences, the system now has enough information and synthesizes the picture; that is the modeling. Given this information, we can now cut in different locations, and the system synthesizes the appropriate texture. This system supports three texture types. The first is isotropic, with no orientation. It is the easiest one, for things like sausages, sponges, potatoes, and others: you just pick the type and drag and drop a single texture, with no orientation, and the system starts automatically, like this. The second is a layered structure, such as carrots or cakes; this is also a layered structure.
Here the user selects the layered texture type and provides an image. As we have already seen, the user specifies inside and outside, or top and bottom, and then specifies the same thing for the 3D geometry: top and bottom. Now the system knows the correspondence between the reference image and the 3D geometry, and it starts to synthesize the texture of our chocolate cake. The same goes for the carrot: the user specifies inside and outside on the image, and inside and outside on the 3D shape, and then the system synthesizes the texture. The final type is oriented texture, like this bamboo shape. Bamboo fibers have a specific orientation, so in that case the user needs to specify the orientation of the fibers. The user input is a two-dimensional texture of a horizontal cut, which gives a three-dimensional reference volume, and then the user specifies the orientation: the user draws lines here, the orientation field is automatically generated, and the system synthesizes the texture along that orientation. It is a little bit difficult to see in this presentation, but I hope you can see the long horizontal lines here and lots of dots here. And here is a modeling example: you have a three-dimensional tooth model. Initially there is nothing inside; you pick a photograph or an illustration, specify the relationships, and you get a volumetric texture for the tooth. That's it. And here are a couple of results: a cucumber, a donut, bamboo, a tooth. As I said, in the modeling phase the user specifies how to paste the image onto a cross section of the 3D model by specifying outside, inside, and so on. The algorithm works like this: for the reference image, the user specifies inside and outside, and the system computes a control map, a kind of diffusion of the distance from the inside. The system also computes the same control map for the cross section, from the annotations given to the 3D shape.
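The control map just described can be sketched in code. The following is only a minimal illustration of the idea, not the paper's actual implementation: it assigns every grid cell a value between 0.0 (at user-marked "inside" seeds) and 1.0 (at "outside" seeds) by relative breadth-first distance, a crude stand-in for the diffused distance field; all names and the grid representation here are my own assumptions.

```python
from collections import deque

def bfs_distance(seeds, w, h):
    """4-neighbour breadth-first distance from a set of seed cells on a w x h grid."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

def control_map(inside, outside, w, h):
    """Per-cell control value: 0.0 at 'inside' seeds, 1.0 at 'outside' seeds,
    interpolated in between by relative distance to the two seed sets."""
    d_in = bfs_distance(inside, w, h)
    d_out = bfs_distance(outside, w, h)
    cmap = {}
    for y in range(h):
        for x in range(w):
            a, b = d_in[(x, y)], d_out[(x, y)]
            cmap[(x, y)] = a / (a + b) if a + b > 0 else 0.0
    return cmap
```

Computing such a map once for the annotated reference image and once for the annotated 3D cut gives the correspondence the synthesis step needs: pixels with similar control values play similar roles in the layered structure.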
Then, given these two control maps, the system generates a synthesized cross section using a texture synthesis algorithm. Let me briefly describe texture synthesis. It is a very popular technique in computer graphics that generates a larger texture from a smaller input texture. The original paper was published in '99, and the basic idea is very, very simple: to synthesize each individual pixel of the new texture, you search for the pixel with the most similar context (neighborhood) in the reference, and if you find the most similar one, you just copy its color to the current pixel. Just by repeating this many times, starting from a random field, you get a very realistic texture. This technique is very frequently used and is already used in commercial products. So our paper was published in 2004 as "Volumetric Illustration," and there are lots of texture synthesis papers, but the original one was the texture synthesis paper at ICCV '99. If you want to know the recent ones, a famous one is PatchMatch, published in 2009, which is currently used in Photoshop and other applications. And there are a couple of more recent 3D texture synthesis methods, such as solid texture synthesis and lapped solid textures, which may be interesting for you. Thank you. Okay, so this is the end of 3D modeling. We introduced suggestive interfaces, sketch-based modeling, shape control by curves, and volumetric textures, and let me go back to the initial discussion. The challenge in 3D modeling is how to complement missing information, mainly depth or z values. One approach is to design user interfaces and automatic inference algorithms leveraging domain-specific knowledge. So, in the examples we showed today, for architecture modeling we used hard-coded rules for designing architectural models, and for round organic shapes we used inflation algorithms, leveraging the fact that the target is smooth.
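The per-pixel search just described can be sketched as follows. This is a heavily simplified illustration in the spirit of the '99 pixel-based approach on grayscale values, not the system's actual code: the output starts as a random field, then each pixel (in scanline order) copies the exemplar pixel whose causal neighborhood best matches what has already been synthesized. The function name, neighborhood radius, and exhaustive search are my own simplifying assumptions.

```python
import random

def synthesize(exemplar, out_w, out_h, n=1, seed=0):
    """Minimal pixel-based texture synthesis: grow an out_w x out_h texture
    from a small exemplar (a list of rows of grey values) by copying, for
    each output pixel, the best-matching exemplar pixel."""
    rng = random.Random(seed)
    eh, ew = len(exemplar), len(exemplar[0])
    # Start from a random field, as described in the lecture.
    out = [[rng.choice(rng.choice(exemplar)) for _ in range(out_w)]
           for _ in range(out_h)]
    # Causal neighbourhood: pixels above, and to the left on the same row.
    offsets = [(dx, dy) for dy in range(-n, 1) for dx in range(-n, n + 1)
               if dy < 0 or dx < 0]
    for y in range(out_h):
        for x in range(out_w):
            best, best_cost = None, None
            for ey in range(eh):          # exhaustive search over the exemplar
                for ex in range(ew):
                    cost = 0
                    for dx, dy in offsets:
                        ox, oy = x + dx, y + dy
                        px, py = ex + dx, ey + dy
                        if 0 <= ox < out_w and 0 <= oy < out_h \
                           and 0 <= px < ew and 0 <= py < eh:
                            cost += (out[oy][ox] - exemplar[py][px]) ** 2
                    if best_cost is None or cost < best_cost:
                        best, best_cost = exemplar[ey][ex], cost
            out[y][x] = best              # copy the most similar pixel's value
    return out
```

The volumetric system applies the same matching idea, but with the control maps added to the neighborhood comparison, so that "inside" regions of the cut are filled from "inside" regions of the reference image.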
And for shape control by curves, we developed very specialized structure and geometry editors. And for cross sections, we used a texture synthesis algorithm. Thank you.