Digital Prototyping 2. We're still in the Designing XR Experiences segment. I have talked about the digital prototyping space already, starting from digital authoring tools and moving to immersive authoring tools, where you work directly in AR or VR rather than on a desktop computer. Then I said there is also web-based development, cross-platform development, and native development, which usually require programming. But people who are experienced with these tools would still consider them, at least in part, digital prototyping. I want to talk about two specific tools here that we're actually using in this MOOC specialization: A-Frame as the example of web-based development, which is based on the WebXR standard, and Unity. A-Frame is a tool that I really want you to play around with a little bit. You may actually like it if you have a web development or web design background. Unity is, like Unreal, a very powerful tool, and it usually requires some time to learn. Unity comes with an editor where you can do a lot of things just by clicking and playing, and it actually has good documentation. You can do quite a few things in Unity without having to program. That's why I've selected these two tools as examples that still fall into what I would call digital prototyping for AR and VR.

Let's start with A-Frame. A-Frame is based on Three.js and WebGL. This technology stack is important information to some people out there. What it means is that within the HTML5 web standard, we have an element called the canvas. The canvas is essentially a graphics context that you can render 2D or 3D content into, and when we talk about WebGL, we usually mean rendering 3D content on the web. A-Frame gives you a new set of HTML tags. All of these tags start with "a-", like a-scene, a-box, and a-cylinder: lots of tags that are translated to specific entities, as we call them in 3D. So you can create 3D scenes, but you're writing HTML code.

A-Frame has an inspector. This is not an editor, keep that in mind. It allows you to inspect the scene, look under the hood, and find out where the light is or why an animation isn't working, for example. A-Frame also has asset management: you bring in typical resources from the web like images, videos, and sounds. On top of that, A-Frame has a set of 3D model loaders, which are a good way of bringing in common 3D model file formats.

I'll list two more features of A-Frame. One is the ECS architecture. ECS stands for Entity-Component-System; that's something we'll look into in more detail later, because it's really relevant for programming. Just keep in mind for now that you will find lots of open-source components out there that you can plug into your A-Frame code: components that give you access to VR controllers, to certain 3D models, and to certain behaviors. It's a very plug-and-play architecture, and that's how I'd like you to think of A-Frame at the moment. The other feature is that A-Frame supports cross-platform XR, both AR and VR.
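To make the tag idea concrete, here is a minimal A-Frame scene. This is just a sketch with placeholder colors, positions, and animation values that I picked for illustration; the script URL points at one of the A-Frame releases:

    <html>
      <head>
        <!-- Load A-Frame itself; use whichever release is current -->
        <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
      </head>
      <body>
        <!-- Everything inside a-scene becomes part of the 3D scene -->
        <a-scene>
          <!-- a-box is an entity; the animation attribute is a component plugged onto it -->
          <a-box position="-1 0.5 -3" color="#4CC3D9"
                 animation="property: rotation; to: 0 360 0; loop: true; dur: 4000"></a-box>
          <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
          <a-sky color="#ECECEC"></a-sky>
        </a-scene>
      </body>
    </html>

That animation attribute on the box is the Entity-Component-System idea at work: the box is the entity, and the animation is a component you plug onto it, the same way community components for VR controllers or physics plug in.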
Here, I have an example of a 360 photo that my former postdoc, Max, actually took in the lab. I can zoom into different areas of this photo, and then it looks a little more normal. There we have, for example, the clock, and there we actually have some sketches. We were working on ProtoAR, XDAR, and 360MR at the time; 360MR was then called 360Anywhere, and these all became papers. If you look at this photo, it is actually a 360 photo of the lab, and you can see some of these distortions. It's basically an equirectangular map, an equirectangular format. This photo was taken with a 360 camera, a Theta V from Ricoh. It has two fisheye lenses and then stitches the photo together at the end. I'm going to show you how we can use this photo for prototyping digitally, first with A-Frame and then with Unity.

Here we have the A-Frame scene. I'm using the 360 photo from before and mapping it onto the sky, which already allows me to look around, and then I'm also mapping it onto the sphere here as part of my example. I'm trying to create a virtual reality where there's this interesting, curious-looking sphere. The way I've done it is, I basically uploaded the 360 photo that we took and linked it as an asset. I'm using CodePen here, and it shows the code on the left, basically the body of our HTML document. Then, using A-Frame, I define my a-scene, link the asset, define the sphere, and define the sky. In both cases I'm linking the lab photo as a texture: my 360 photo becomes the texture of both the sky in the background and our 3D object here in the foreground. The last thing I did was play around with the material, really making sure this looks interesting, by adjusting the metalness and the roughness. I can adjust some of these values here and immediately preview the result. Metalness and roughness are clear enough as concepts: they determine how metallic an object is and how rough its surface is, which changes how light is reflected. I understand that as a concept, but it's really hard to get it right.

Let's say I'm cool with this; I'm going to preview it in VR. I have my Oculus Rift S here, and I'm just going to put it on my head. I'm working in Firefox and I can see a preview. I'm happy with the composition of the scene. Maybe that sphere could be at a slightly different location, but that's something we can fix later. I can also adjust other parameters in the inspector. I can just go into the scene, edit my sphere a little bit, maybe give it a bit more space; the code becomes less relevant right now because I'm using the inspector. Then I go to the material and play a little bit with the metalness and also with the roughness. There are a lot of things we can play around with. Maybe I'm happy with this and I'll leave it like that, and go back to the scene. I could copy and paste these values into the document on the left if I'm truly happy, but I can also continue iterating, previewing in VR and going through that whole process, until I'm really happy with how things look in VR. So I have a combination of digital and immersive prototyping here, using A-Frame.
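For reference, here is a sketch of roughly what that scene looks like in code. The file name lab-360.jpg is a placeholder for the 360 photo, and the metalness and roughness values are illustrative, not the ones from my demo:

    <a-scene>
      <a-assets>
        <!-- The 360 photo, linked once as an asset -->
        <img id="lab" src="lab-360.jpg">
      </a-assets>
      <!-- The equirectangular photo becomes the background sky... -->
      <a-sky src="#lab"></a-sky>
      <!-- ...and also the texture of the sphere in the foreground -->
      <a-sphere position="0 1.5 -2" radius="0.75"
                material="src: #lab; metalness: 0.5; roughness: 0.2"></a-sphere>
    </a-scene>

With the scene open in the browser, Ctrl+Alt+I toggles the A-Frame inspector, which is where I tweaked the metalness and roughness before copying the values back into the code.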
Unity has been out there for a while now; it actually grew out of the game development community. Unity was originally a game engine, and it still is, but it has grown into a platform with significant support for XR, both AR and VR. As a result, Unity nowadays has integrated support for many XR platforms. For example, Vuforia, which is a marker-based AR library, is now part of Unity and ships with it. But you can also add lots of other SDKs to Unity, and there is increasing support built into Unity, so that you have to install less on top of it.

One of the things I particularly like about Unity is its powerful 3D scene editor. There are a lot of things you can do in this editor, and you get immediate previews. I'll show you just the basics in a few seconds, but imagine that you can create real experiences and edit them in 3D on a laptop or desktop. You often get a sense of what it may look like in AR or VR, although most often you still need to deploy to these devices to actually find out, or use an emulator. Unity also provides increasing support for emulators, and it has asset management and an actual Asset Store, which is different from the other tools I've talked about so far, where the tool is one thing and finding 3D models on the web is another. With Unity, you do a lot through the Asset Store: you can download libraries and add packages to Unity, or find 3D models. Like A-Frame, Unity basically supports all kinds of AR and VR devices out there. Many vendors, if not all of them, provide a Unity SDK that you can download so that you can develop against their platform and hardware using Unity.

All right, here we have the same scene in Unity, basically working with a few objects. I started out with the 360 photo, imported it, created this sphere, and then mapped the photo onto the sphere, which generated this material. I can play around with this material a little more as I adjust some of its parameters, like the metalness or, more interesting here, the smoothness. The smoothness gives this interesting effect of the light source being reflected. That's the light, and we can rotate it to change where the light hits the sphere, which creates a more interesting-looking effect. Now, one thing I'm not exactly happy with is the position relative to the camera, so I can either move the camera a little bit to the right or move the sphere a little bit to the left, which I think is what I'm going to do. I'm going to move it a little to the left. Now I can press play; we're already using an XR Rig, so the camera has been converted to an XR Rig using Unity's XR plugin. I just press play, and I have the Oculus on my head, the Rift S I'm using here. Looking at the sphere, I like the position; it gives a relatively good reflection of Max, and it's just interesting where it's floating. I'm happy with the result.

We also mapped that 360 photo into the environment by changing the rendering settings. There are a couple of ways I could do this, but I chose to replace the default skybox, which is this one, with our 360 photo, and that gave me that effect. It's not a big effect, but one thing we could do is further edit this material as well. For example, if I'm not exactly happy with the current rotation, I could change the default rotation, but then I would also need to adjust the settings on the sphere, so I basically rotate the sphere too. I could make it a little brighter if I wanted. Maybe that's cool. I'm going to try it one last time in VR and keep prototyping this way. I'm pretty happy with what it looks like now. It's maybe a little overexposed, but that's fine; we can continue to play with this. The point of this last part is to show why prototyping using VR techniques is so powerful.
Basically, you could be anywhere, and then we transport you into the lab; you might as well be in the lab, and we were effectively prototyping an AR experience for that space. You can do this while being in the lab, or you can do it from anywhere, based on a 360 photo previously taken of that location. I think that is a very powerful way of prototyping XR applications very quickly. And if you're good and skilled in Unity, you can create these prototypes very quickly; we're going to dedicate a larger part of the development-oriented course in this specialization to getting better at Unity. I consider the Unity editor a powerful tool for prototyping as well.