[SOUND] Hi, this is Interactive Computer Graphics, week three. The topic we'll discuss here is 3D geometric modeling. The challenge behind this is that the user interface is still mostly two-dimensional, so the question is how to complement the missing information, mainly depth. The user's input is 2D, and we need to get 3D from that 2D input. That is the problem we will address here. Our solution, in general, is a user interface with automatic inference that leverages domain-specific knowledge. If the system does not know what you are doing, it is almost impossible to infer depth; but if it knows what you want to do, if you limit the domain, you can get reasonable results. That is the basic idea behind the works I will introduce here.

Here are the topics we will discuss this week. The first is a suggestive user interface for modeling architectural models. Then comes sketch-based modeling of large and stuffed things. Then we discuss shape control by curves, then flower modeling, and then volumetric texture synthesis.

The first one is the suggestive interface. This work was published as "Chateau: A Suggestive Interface for 3D Modeling." The motivation is as follows. Current graphical user interfaces are dominated by many buttons, menus, and, even worse, nested menus. It is very hard to find a desired command, and these menus occupy a huge amount of screen real estate. That is the problem we address. Our approach is, instead of asking the user to explicitly specify a command, to ask the user only to provide hints about the desired operation in the main working area. Suppose we are working on 3D modeling and you want to do something related to these three elements in the scene: the user just highlights the three elements, which is a hint asking the computer to infer the request. The computer then looks at the hints and proposes related operations. In this case, the system proposes a box, along with two other possibilities.
So, in other words, the user first gives the arguments for a command, and then the system automatically infers the command. Let me show you a demo. Here is a three-dimensional modeling system with a 3D space inside, and you can sketch lines in this way. But here, instead of using many commands on a menu, we simply allow the user to highlight lines. For these highlights, or hints, the system presents some suggestions: one is a drawing plane, another is drawing a triangle, and the other is drawing a rectangle. You can also ignore them and click for more. Then, given these perpendicular lines, the system suggests making a box. Again, if you draw a line here, then given this hint, the system suggests cutting the box. You can also go back and add more lines, and given this hint, the system suggests cutting a corner. In this way, the user provides hints to the system, and the system presents suggestions. The user does not need to find a specific command in a huge command list, and can instead focus on the visual content of the task.

If you highlight this, the system suggests a drawing plane, and the user can draw, for example, a rectangle or a triangle here. If you highlight these three, the system suggests making a face inside. If you highlight one more, the system suggests extrusion. One interesting operation is this one: if you highlight these two and also draw one line here, at about a third of the way along, the system divides the span exactly into three. The user can also draw the line somewhere here instead, and the system suggests dividing it into four. In this way you can generate this kind of diagram just by repeating operations. After some operations, you can get these kinds of results. This kind of operation is very useful for repetition, symmetric structures, and so on.
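The "divide into three or four" behavior just described hints at a simple ratio-snapping idea: take where the rough hint line falls along the span and snap it to the nearest simple fraction 1/n. The sketch below is my own illustration of such logic, not code from the paper; the function name `infer_division` and the `max_parts` cutoff are assumptions.

```python
# Toy sketch (not the Chateau implementation): snap a rough hint position
# to the nearest fraction 1/n to guess the intended number of divisions.
def infer_division(hint_t: float, max_parts: int = 8) -> int:
    """Given where the hint line falls along the span (0.0 to 1.0),
    return the number of equal parts the user most likely intends."""
    best_n, best_err = 2, float("inf")
    for n in range(2, max_parts + 1):
        err = abs(hint_t - 1.0 / n)
        if err < best_err:
            best_n, best_err = n, err
    return best_n

print(infer_division(0.32))  # hint near one third -> 3
print(infer_division(0.26))  # hint near one quarter -> 4
```

A rough hint is enough: the system commits to the exact ratio, which is why the division comes out precisely into three or four parts even when the drawn line is only approximately placed.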
And let me briefly describe the implementation behind the system. This is the input scene: some highlighted elements and some non-highlighted elements. Taking this example input scene, the system is equipped with multiple suggestion engines. They are parallel engines working independently, and each engine consists of two parts: an examiner and a generator. The examiner examines the scene and tests whether the scene matches the engine's expected rule. If the scene passes the examination, the generator generates the resulting scene. Each engine is a hard-coded, pre-programmed module provided by the programmer, but they all work in parallel, and a specific engine reacts to a specific input configuration.

Here is the list of suggestion engines we implemented in this application. One is the drawing plane: given a hint, the system generates a drawing plane passing through the hint. If you highlight a closed loop, it suggests a polygon; two perpendicular lines suggest a rectangle; three lines suggest a box; and the system also suggests extrusion and a pyramidal shape. Then there are more advanced ones, such as resizing, bridging (connecting two polygons), extrusion, cutting out, corner cutting, trimming, intersection, duplication, repetition, mirror images, and so on.

The philosophy or concept behind this is as follows. The current standard user interface works like this: the user provides a very explicit command or instruction to the computer, and the computer just blindly follows the instruction. What we propose is a more friendly, human-to-human-like interaction: the user works on the visual task directly, and the computer implicitly observes the actions and provides help and suggestions. I think this kind of working style can be very useful.
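The examiner/generator architecture described above can be sketched roughly as follows. This is my own minimal illustration in Python, not the actual Chateau implementation; all the names (`Scene`, `SuggestionEngine`, `RectangleEngine`, `BoxEngine`) and the perpendicularity test are assumptions for the sake of the sketch.

```python
from dataclasses import dataclass

@dataclass
class Line:
    # A hint line represented only by its direction vector (illustrative).
    dx: float
    dy: float
    dz: float

@dataclass
class Scene:
    highlighted: list  # the elements the user highlighted as hints

class SuggestionEngine:
    """Each engine pairs an examiner with a generator."""
    def examine(self, scene) -> bool:
        raise NotImplementedError
    def generate(self, scene) -> str:
        raise NotImplementedError

class RectangleEngine(SuggestionEngine):
    # Reacts to exactly two highlighted, mutually perpendicular lines.
    def examine(self, scene):
        if len(scene.highlighted) != 2:
            return False
        a, b = scene.highlighted
        return abs(a.dx * b.dx + a.dy * b.dy + a.dz * b.dz) < 1e-6
    def generate(self, scene):
        return "rectangle"

class BoxEngine(SuggestionEngine):
    # Reacts to three highlighted, pairwise perpendicular lines.
    def examine(self, scene):
        if len(scene.highlighted) != 3:
            return False
        lines = scene.highlighted
        for i in range(3):
            for j in range(i + 1, 3):
                a, b = lines[i], lines[j]
                if abs(a.dx * b.dx + a.dy * b.dy + a.dz * b.dz) > 1e-6:
                    return False
        return True
    def generate(self, scene):
        return "box"

def suggest(scene, engines):
    # Run every engine independently; collect suggestions from those
    # whose examiner accepts the current hints.
    return [e.generate(scene) for e in engines if e.examine(scene)]

engines = [RectangleEngine(), BoxEngine()]
hints = Scene(highlighted=[Line(1, 0, 0), Line(0, 1, 0)])
print(suggest(hints, engines))  # two perpendicular lines -> a rectangle
```

The key property is that engines never coordinate with each other: each one only recognizes its own input configuration, so adding a new operation means adding one more independent examiner/generator pair to the list.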
One possible application is like this. In PowerPoint, or any presentation or drawing system, you have an alignment tool. But the alignment tool is hidden somewhere in the menus, and it is not easy to find. For example, in this PowerPoint, you can get to it from somewhere like Home, and then somewhere, I don't know [LAUGH], alignment, somewhere. It is not easy to find. However, if you highlight these four almost-aligned elements, I think it is pretty reasonable, and also useful, for the system to suggest particular alignments, such as align left or align center, to the user. If the user likes one, they just click it and get the result. This can be a very powerful help for the user. So that's it; the original paper was published in 2001.

Our work was strongly inspired by a previous system called SKETCH. The SKETCH system takes two-dimensional gestures, and the system automatically generates this kind of three-dimensional scene. What they do is very interesting: the input is 2D, and the system needs to infer 3D positions. To do this, the system uses the rule that every object has to be placed on top of an existing 3D object. In this way, the system efficiently computes the 3D depth of two-dimensional gestures. Showing multiple candidates for the user to choose from was also presented before, in a work called Design Galleries, published in 1997. I also recommend you take a look at that work. Thank you.
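As a closing note, the SKETCH placement rule mentioned above (every new object rests on top of an existing object) can be illustrated with a toy one-dimensional version: the unknown height of a new object is simply the top of whatever already sits under the 2D gesture. This is purely my own sketch, not the original SKETCH code; `Box`, `support_height`, and the flattened one-axis setup are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Box:
    # An existing object, flattened to one screen axis for illustration.
    x_min: float
    x_max: float
    top: float  # height of the box's top face

def support_height(x: float, boxes) -> float:
    """Height at which a new object dropped at screen position x would rest:
    the highest top face under x, or the ground plane (0.0) if none."""
    tops = [b.top for b in boxes if b.x_min <= x <= b.x_max]
    return max(tops, default=0.0)

scene = [Box(0, 2, 1.0), Box(1, 3, 2.5)]
print(support_height(1.5, scene))  # over both boxes -> rests on the higher top, 2.5
print(support_height(5.0, scene))  # over empty ground -> 0.0
```

The point of the rule is that it removes the ambiguous degree of freedom: once the support surface is fixed, a 2D gesture fully determines a 3D position.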