Hi, my name is Emanuel Demetrescu, and this week I'm going to talk about the process involved in creating digital cultural heritage starting from physical cultural heritage. From a theoretical point of view, the digitization process in archaeology applies tools and methods to the reality of the cultural heritage, and through those tools and methods we obtain a digital model, also called the theoretical model. The tools involved, however, can collect only specific aspects of reality: the digital camera collects color, the laser scanner collects geometry, and a tablet used during the survey collects the annotations and textual descriptions made by the archaeologists, and so on. For that reason we do not get a full copy of reality, but only an approximation of it, so it is more accurate to talk about a selection of aspects of reality. The selection of these aspects is made by specialists, and it is definitely a human matter. In other words, how can a specialist decide which aspects of the cultural heritage have to be collected? In the archaeological approach, the focus is on the objectives of the research. In the case of an ancient Greek pot like this one, we can choose to count the overall number of particles, or to focus on specific aspects such as the style, the typology, or the dimensions. While in the first case we will not get results useful for archaeological interpretation, in the second we obtain material that is genuinely useful for archaeological documentation, and we can document this pottery properly. So we have to decide which aspects we want to collect, keeping in mind the purpose of the research. Here, then, are the digitization steps in a nutshell. First, we need to choose a purpose: the question is what we want to know, and what we want to acquire as long-lasting documentation about an archaeological monument or object.
Starting from here, we can define a digitization strategy, the so-called planning of the digitization strategy. We choose all the tools able to collect the aspects we want to include in the documentation: a laser scanner, a digital camera, simple textual annotations, and so on. In the end we obtain structured data, that is, information organized according to the archaeological questions. Next week, with Professor Andrea Vitaletti, you will see how to structure this information in an effective way using metadata tools.

An important part of the digitization process is the 3D survey in the field. Through this process we collect information about the overall shape of the monument, graphical representations, textual information, and so on, using different tools. Let's look at the evolution of the 3D survey. At the very beginning, all the information was collected in analog form and formats. We now have an enormous amount of this kind of data, which we call legacy data to distinguish it. All of these data had to be digitized manually, so the process was error-prone and time-consuming. Nowadays, in more and more excavations around the world, a large part of the information is collected directly in digital format, and here we can talk about born-digital data. Tools like tablets, laser scanners, and digital cameras are used directly in the field, improving the documentation steps.

The 3D survey techniques can be divided into two branches: the range-based approach, which uses active sensors, and the image-based approach, which uses passive sensors. In the first, the tool sends a pulse to the archaeological artifact and reads the response coming back in terms of distance, color, and so on. Examples of such tools are laser scanners and lidars.
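The idea of organizing what we acquire around the research questions can be sketched as a minimal survey record. This is only an illustrative sketch: the field names are assumptions for the example, not a real metadata standard.

```python
from dataclasses import dataclass, field

@dataclass
class SurveyRecord:
    """One digitization record; fields are illustrative, not a metadata standard."""
    object_id: str
    research_question: str            # the purpose driving the acquisition
    tools: list = field(default_factory=list)    # e.g. ["laser scanner", "digital camera"]
    aspects: list = field(default_factory=list)  # aspects of reality selected for capture
    notes: str = ""

# Example: the Greek pottery case, documented for typology and dimensions.
record = SurveyRecord(
    object_id="pottery-042",
    research_question="typology and dimensions of Greek pottery",
    tools=["digital camera"],
    aspects=["geometry", "color", "dimensions"],
)
print(record.object_id, record.aspects)
```

The point of the structure is simply that every acquisition is tied to a purpose and to the aspects selected for it, which is what makes the data "structured" in the archaeological sense.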
In the image-based approach, on the other hand, we have passive sensors that can only read specific aspects from reality: they record the natural emission of light waves, as in the case of a digital camera. These 3D survey techniques have changed a lot in recent years. In the 2000s, the most precise and robust technology was definitely the range-based approach, but in the last few years, and especially the last two or three, the image-based algorithms and approaches have improved so much that the 3D surface reconstruction capacity of the image-based approach has in several cases exceeded that of the range-based one. These techniques, however, each have pros and cons.

The pros of the range-based approach: we get an immediate result. We go into the field with our laser scanner and we take our point cloud, which is a representation in space of all the 3D measurements we obtain with our instrument, and we get this point cloud immediately. Back in the laboratory we have an immediately referenced output: I am able to measure, for example, the distance between two columns. Moreover, the error of this method is known, in the sense that the manufacturer of my instrument, a particular model of laser scanner, declares the overall precision and accuracy of that instrument. But there are also cons. It is very expensive, several tens of thousands of euros, and, worse, it is expensive to scale. What does that mean? If I have to acquire a coin, a building, or an entire environment, I have to use different tools, each set up just for that purpose. Last but not least, texture is not well supported: there are good laser scanners that are very accurate in 3D color acquisition, but they are few and very expensive.

The image-based approach, on the other hand, has different pros.
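Measuring inside a referenced point cloud, like the distance between two columns mentioned above, comes down to a Euclidean distance between two 3D points. A minimal sketch, with hypothetical coordinates standing in for points picked from a real cloud:

```python
import math

def distance(p, q):
    """Euclidean distance between two 3D points, in the cloud's unit (e.g. metres)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical coordinates of two column bases picked from a referenced point cloud.
col_a = (0.0, 0.0, 0.0)
col_b = (3.0, 4.0, 0.0)
print(distance(col_a, col_b))  # 5.0
```

This only gives meaningful values because the laser scanner output is already referenced: the coordinates carry a real-world unit.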
It is scalable, so you can use the same techniques and tools for a coin, a building, or an entire environment, and this is very useful. The equipment is low cost: with a few thousand euros you can get top-level instrumentation, with a few hundred euros you can obtain very good results, and you can even work with your smartphone and still get some results. This is what you will do in the next lesson. Last but not least, you get very good color information, because we start from photos, and the color of our photos is the color of our final 3D model.

But of course there are also cons, and the cons of the range-based and image-based approaches are largely complementary. The first con is that we need post-processing: we do not get an immediate result as with the laser scanner. I go into the field, I take my photos, and then I have to go to the laboratory and run some algorithms, some processes, to extract the 3D information from the photos; we will see how. We also get an unreferenced output, which means that I cannot take measurements on the 3D point cloud that comes out of post-processing. How can I avoid this problem? While I am in the field I take some reference measurements, for example the distance between the corners of two buildings, and I use those measurements to reference my 3D model and rescale it. After this process my model will be correct and I will be able to measure whatever I want inside the scene, such as the height of a column, and so on. The last con is less important, but it matters for a good result: we need complementary skills. You will certainly get some results with your smartphone or digital camera in the next lesson, but if you want to produce something very good and very accurate, you need to know how to take very good photos, and you need to understand a little more deeply how the algorithms work.
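The rescaling step just described can be sketched in a few lines: compute the scale factor between a distance measured in the unreferenced model and the same distance measured on the field, then apply it to every point. The cloud and reference points here are toy values, not real survey data.

```python
import math

def scale_cloud(points, model_a, model_b, real_distance):
    """Rescale an unreferenced point cloud so that the distance between two
    reference points (model_a, model_b) matches a measurement taken in the field."""
    model_distance = math.dist(model_a, model_b)
    s = real_distance / model_distance  # scale factor: real units per model unit
    return [(x * s, y * s, z * s) for x, y, z in points]

# Toy cloud in arbitrary photogrammetry units; the two building corners
# were measured on site as 10.0 m apart.
cloud = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
scaled = scale_cloud(cloud, (0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 10.0)
print(scaled[1])  # (10.0, 0.0, 0.0)
```

A real workflow would also need to fix the model's position and orientation (full georeferencing), but the uniform scale factor is the part that makes measurements inside the scene meaningful.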
You also need to know how to correct and improve a 3D model inside 3D software. So, a lot of skills. We will nevertheless use the image-based technique in this course, because it allows you to experiment in a convenient way: it is not expensive, and you can try and retry until you get it right. Easy, isn't it? But basically, how does it work? I take several photos from different viewing angles, I process them in software tools to obtain a point cloud, and from this point cloud I finally get my 3D model.

Now, let's look at some methodological implications. We will see in practice in the next lesson how to do it, but from a methodological point of view, archaeological record-keeping in the field has changed a lot with the introduction of these techniques. Many new possibilities are now open, and we can say that we are just at the beginning of an epoch of big changes. First of all, digital 3D record-keeping enables a new way to check and share the archaeological excavation process: the so-called digital re-excavation. As the professor said in the previous lessons, archaeological excavation is a destructive process, and we need to document it properly. The 3D survey provides a constant density of information and enables a digital re-excavation in the laboratory, so that another specialist, in another part of the world, can verify, check, and re-excavate the excavation. In this way we are revolutionizing the way we can communicate the investigation process. Another methodological implication is data integration: the 3D model is, in a sense, a box into which we convey all the information we collect about our cultural heritage context. In this example it is even possible to integrate different survey technologies, obtaining a very rich 3D model. Last but not least, the 3D model can be segmented to fit the archaeological granularity.
This means your 3D model can be cut into small elements that can then be connected, in an effective way, to all the relevant information. In the example you can see an ancient Roman ara and a colonnade segmented into their fundamental architectonic elements. In this lesson we have seen all the steps involved in the 3D survey, that is, the process of obtaining a digital model of a real object: you go into the field, you take your photos or your laser scanner acquisition, and you get a 3D model. We have also seen the implications from the point of view of archaeological methodology: with these tools you can re-excavate the 3D model, communicate it in a convenient way, and segment it so as to annotate it properly.
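The segmentation-plus-annotation idea can be sketched as a plain mapping from named model segments to their information. The segment names and fields here are illustrative assumptions, not a real recording schema.

```python
# A minimal sketch of a segmented model: each named architectonic element
# carries its own annotations (names and fields are illustrative).
model_segments = {
    "column_01": {"element": "column shaft", "material": "marble",
                  "notes": "tool marks visible on the lower drum"},
    "capital_01": {"element": "capital", "material": "marble",
                   "notes": "heavily eroded on one face"},
}

def annotate(segments, name, key, value):
    """Attach one more piece of information to a segment of the model."""
    segments[name][key] = value
    return segments

annotate(model_segments, "column_01", "period", "2nd century CE")
print(model_segments["column_01"]["period"])
```

The design choice is that the geometry (the segment) and the archaeological information stay linked by the segment name, which is what lets a 3D model act as the "box" for all collected data.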