We're still talking about designing XR experiences, but we're moving into the digital prototyping space. What I'm going to do in this segment is really focus on, well, how do we take, for example, our storyboards that we created on paper, or our physical prototypes, to the next level in terms of fidelity? How do we take them into digital? I'm going to talk about a few techniques in digital prototyping, up to the point of actually doing it in immersive authoring using AR and VR devices. Let's watch a quick video of what I'm talking about.

He is now moving around the object and capturing the Play-Doh figure. The information will be live-streamed back to the editor, where the video is available immediately. He now has access to an additional resource in the collect pane, which he can preview in the capture pane on the right and could recapture if necessary. In this case, however, the designer is satisfied, and all he needs to do is drag and drop the Play-Doh model capture into his AR scene. He's enabling marker tracking. He can immediately preview the object above the marker. He disabled the digital marker representation, and now he's showcasing how ProtoAR renders perspective frames depending on camera movement. This gives him a first sense of the AR interface he wanted to create, with ProtoAR, in this case in less than two minutes.

Now, this is just one way of prototyping things, and it's something that came out of my research; it's not exactly the kind of tool that you have available as a product. More generally speaking, what does the XR tools landscape look like? As a little bit of a reminder, we obviously have digital and immersive authoring. We have existing tools here, and I'll just give some examples. Keep in mind that these are current examples, and we don't know yet whether a tool I list here will still be around tomorrow.

We have digital and immersive authoring. We have web-based development: this is the whole idea of building on the WebXR specification, with tools like A-Frame that I've talked about earlier. We can take a cross-platform development approach, where popular choices are Unity and Unreal, the game engines that I talked about earlier, usually combined with an AR/VR software development kit, or SDK, from popular vendors like Oculus, Vive, and Microsoft. You can also specifically target devices and do native development. This is the fourth approach, the fourth big category of tools: you could choose a particular SDK, like the Cardboard, Oculus, or Vive SDK, to develop for those specific platforms and devices. This is the overall landscape, and as you can tell, some of these approaches require programming. Tools like Unity and Unreal combined with software development kits, or native development, actually do require quite a lot of programming experience.

So what are these approaches good for? I think the digital and immersive authoring tools that I want to introduce you to are good for storyboarding, but then there's limited support for interactions. You can do a few things, but you will very quickly realize there's not too much you can do after the initial prototyping. After that, you need to choose one of three possible paths: web, cross-platform, or native. The web-based approach, I think, is very good for XR apps that have some basic interactions.
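To give you a concrete idea of that path, here is a minimal sketch of a basic interaction in A-Frame: a box that changes color when you gaze-click it. The `toggle-color` component is just a name I made up for this illustration, not part of A-Frame itself:

```html
<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
    <script>
      // Toggle the entity's color whenever it receives a click event.
      AFRAME.registerComponent('toggle-color', {
        init: function () {
          this.el.addEventListener('click', () => {
            const current = this.el.getAttribute('color');
            this.el.setAttribute('color', current === 'tomato' ? 'teal' : 'tomato');
          });
        }
      });
    </script>
  </head>
  <body>
    <a-scene>
      <a-box position="0 1.5 -3" color="tomato" toggle-color></a-box>
      <!-- The gaze cursor is what makes the box clickable,
           in the browser and in a headset alike. -->
      <a-camera><a-cursor></a-cursor></a-camera>
    </a-scene>
  </body>
</html>
```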
Sometimes, it still feels a little unfinished with current WebXR approaches if you compare it to a Unity or Unreal experience; that would be the cross-platform development approach. These tools are actually good for full-fledged XR apps. They can be visually stunning; you can create really nice-looking experiences there. But I also think the cost is pretty high: learning these tools usually comes with a steep learning curve, depending on your path and your background. Remember, I talked about the different paths to becoming an XR creator. If you have a game dev background, Unity and Unreal will feel very natural to you. But if you're like me, coming more from a web dev background, it'll be a little harder to pick up those tools quickly. Finally, the native development approach is good for, again, full-fledged XR apps, and for exploiting platform-specific or even device-specific features. In some sense, it's the best experience you can create. But the issue is that you're locked in with a particular device, platform, or vendor, and that limits the experience; it limits the number of users you can reach.

Now let's really talk about just digital and immersive authoring, so digital prototyping. I want to do it the same way as I did for physical prototyping. Again, we're thinking about low fidelity to high fidelity, and keep in mind that digital prototyping should be closer to high fidelity: the experience should be closer to the final thing that we imagined. Then again, there are methods that I want to tell you about, some of which are easier and some of which are harder.

How does it actually work for digital prototyping? Digital authoring is a specific set of tools that allow you to create things through visual authoring. It's like using an editor, click and play, in some sense. Then you can often run these experiences directly on an AR device; I'll show you some of these tools. Immersive authoring really means working in AR or in VR, so the way you interact with the content at this level is quite different: you're really immersed in VR or AR, and the tools need to translate a little bit. Imagine that with immersive authoring, you're really going inside VR or AR, putting on your headset, and then obviously the interactions change. You need to think: oh, the user doesn't actually see the real world anymore, so how do they interact with the objects in physical space? With immersive authoring, you're really testing as you're designing, so it's actually quite an interesting way of prototyping, and I'll show you some of the tools in that space.

In both cases, with digital authoring and immersive authoring, you're really subject to the limitations of what you can prototype with these tools. If you're moving into the other approaches that I talked about, web-based development, cross-platform development with Unity or Unreal, or native development with the software development kits for some of the AR and VR devices out there, you're moving slowly but definitely into a space where it becomes a lot harder to do, but where you can also create a lot richer experiences. So you're moving towards high fidelity, but it is also hard. Now, I wouldn't actually say that all of this is digital prototyping.
When I use the words digital prototyping, I want to draw the boundary around digital authoring tools, these click-and-play tools, through immersive authoring, doing things in AR or VR as opposed to just 3D on a laptop or desktop, then web-based development, and just scratching a little bit into cross-platform, because as long as you do not have to write your own scripts for interactions, I still think of the Unity and Unreal editors as tools for prototyping. That's what I wanted to illustrate by drawing the circle and the boundaries that way.

Now let's zoom into this circle of prototyping tools, and I want to start with digital authoring tools for VR. Perhaps unlike what you might expect, I've selected Amazon Sumerian and Unity, but just the Unity editor part of it. I don't refer to Unity here as the bigger game engine and the whole platform with C# and scripting and all that. Just as a visual authoring tool, I think it is a very popular choice; a lot of people prototype directly in Unity.

What does that mean? What can you do with these digital authoring tools? Most of them support visual authoring, so using a mouse and a keyboard and navigating like in first-person shooter games, with WASD and those kinds of things. I'll give you an intro to that later. They support visual authoring of 3D scene graphs, so essentially hierarchies of 3D objects. They also support VR previews: often it's just a matter of quickly connecting a VR device, or going through some deployment method to quickly deploy to this Quest here, and then previewing the thing that you just created and being happy if you did it right. But the workflow is really using the desktop, then putting on a VR or AR device and testing it that way. I'm alluding to the other tools here: immersive authoring would give you the advantage that you do everything in AR or VR.

But staying with digital authoring tools: you can do basic interactions without programming. You usually have access to basic interactions like the user looking at things, clicking things with a VR controller, or touching things on their phone. Those are usually available in these tools. You can also do somewhat more advanced interactions, but this is hard. Sometimes you may add specific toolkits; for example, Unity has the XR Interaction Toolkit, while Microsoft is working on the Mixed Reality Toolkit. There is also the Virtual Reality Toolkit, VRTK. If you add these kinds of libraries, you may get away without having to write your own code. But for truly advanced interactions, there's usually no way around writing code.

The next group of tools is digital authoring tools for AR. How does AR differ? Well, there's a different set of tools out here, and they differ in that they support visual authoring of either marker-based or markerless AR apps. We'll learn more about what that means, but in the marker-based scenario, all you would do is use one of the markers that these tools usually provide, hold it up in front of the camera, and then use it for tracking. The way I would interact with it is pointing my phone at the marker, and then prototyping that way. You can do this with the default markers that are provided, like the astronaut I just showed you, or you can also create custom markers in later stages of the prototyping experience.
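The AR authoring tools here are visual, so you won't write this yourself, but as a hedged sketch of what the marker-based idea boils down to in code, this is how it looks on the web using AR.js on top of A-Frame (AR.js is not one of the tools above; the box renders only while the default "hiro" marker is visible to the camera):

```html
<script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
<script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
<a-scene embedded arjs>
  <!-- A built-in marker preset; a custom marker would instead use
       type="pattern" with a url pointing to your own .patt file. -->
  <a-marker preset="hiro">
    <a-box position="0 0.5 0" color="teal"></a-box>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>
```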
I would often just use default markers to begin with, because they track really well and the SDKs work well with them; custom markers are something for when you want to advance in the prototyping stage, maybe building something more like a minimum viable product that has a specific theme. The last characteristic of digital authoring tools for AR is that they typically enable augmented reality previews, either through some software that you download onto your computer or that is built into the prototyping tool, like in the case of Lens Studio here, which we call an emulator, or by sending it to a device, which usually requires connecting the AR device, maybe a smartphone, via USB to the computer and quickly deploying to it for testing.

As I was talking about these digital authoring tools, I was also pointing out some of their limitations and what makes them cumbersome when it comes to actually working with AR/VR. You often have to deploy to a device to actually get a preview, because existing emulators and previews are often not good enough. This is where immersive authoring tools really have an advantage and play an important role.

Again, I would distinguish between immersive authoring tools for VR and for AR; here I show you two for VR. The two examples I wanted to give are Google Blocks and Oculus Quill, both immersive authoring tools for VR. The digital prototypes that you see here, I created while I was in a VR headset. In one case, I was working with primitive shapes on a chair in Google Blocks; you may be able to see it, but that was the idea. I tried to create a 3D model because I needed it for my furniture placement application, and I wanted to do it immersively in VR. Even though the final experience may be in AR, these immersive authoring tools for VR are a good way of prototyping 3D objects that eventually end up in an AR or VR experience; it really doesn't matter which, but you're using VR to prototype.

At the bottom you see Oculus Quill, which is more of a 3D sketching tool for VR, similar to Tilt Brush, which you have seen in other segments of this specialization. What is interesting about Quill is that they actually moved the entire authoring environment into VR. It reminds you of Flash or Macromedia Director, if you've ever worked with those tools, where you can do keyframe-based animations, and you can actually do the same in VR. That is something I still have to learn a little bit more; it's actually quite tedious to use these tools in VR.

But in principle, these immersive authoring tools for VR enable visual authoring in 3D, and you do it in VR. That's quite different from digital authoring in itself. How is it different? Well, they make it possible to edit while at the same time previewing in VR, so the whole time you are in VR, and that is quite a different experience. I would say that the overall tool support of existing immersive authoring tools really focuses on 3D modeling. There is support for animation and some scripting in some of these tools, but since you're doing it in VR, you really don't want to have a keyboard and write code, and stepwise animation, even if it's keyframe-based and intelligent in some sense, is still relatively tedious. I think these tools are best used for 3D modeling and previsualization, for example of the 3D models you will use later, or of the environment that the user will be in, and that's where these tools are really good.
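That modeling output is easy to reuse because, as I'll note next, these tools export to common 3D formats. As a sketch, a chair modeled in Google Blocks and exported as glTF could be dropped into an A-Frame prototype like this (the file path is hypothetical):

```html
<a-scene>
  <a-assets>
    <!-- Hypothetical path to a model exported from an immersive modeling tool. -->
    <a-asset-item id="chair" src="models/chair.gltf"></a-asset-item>
  </a-assets>
  <!-- The exported model is placed and transformed like any other entity. -->
  <a-entity gltf-model="#chair" position="0 0 -2"></a-entity>
</a-scene>
```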
They typically support export to common 3D model formats, which is actually quite useful. In fact, with Google Blocks here, you can share directly to Google Poly, where you can find lots of 3D models online that other people have created with these immersive authoring tools. You can use them as inspiration, or to push yourselves a little bit; there are people who are really skilled with these tools. Sketchfab is an online platform where people share 3D models for free, some of which may have been created with some of these immersive authoring tools; often, however, they use professional 3D modeling software. SketchUp's 3D Warehouse is another platform that is often used for sharing these models.

Let's look at immersive authoring tools for AR. There is a set of tools emerging here, for example Apple's Reality Composer, which you can run directly on your iPad, or Adobe Aero. What you do with many of them is use your tablet, a tablet that has AR capabilities, as your viewfinder. Then you can also do immersive authoring. You're not as immersed as in VR, but you can actually make use of physical space, walk around, and zoom in on these objects that I show you here on the left. I did the same example in both of these: I just placed a 3D model in my office. Apple's Reality Composer is one example, Adobe Aero is another.

What do these tools actually do? Well, they enable visual authoring of 3D content in augmented reality. They make it possible for you to edit these experiences as you are creating them, previewing them at the same time. You don't have to do something on a laptop or desktop and then deploy to a device; now you're working directly on the AR device. In this case, you also usually design specifically for the environment that you're in. The examples I show here on the left were not particularly tied to the specific environment, I just placed cubes in there, but often, when you do immersive authoring with these tools for AR, you really want to do it in the environment that you're designing for.

The existing tools provide relatively basic support for interactive behaviors; any advanced fancy gestures, or say speech commands or whatnot, would still require programming, and there's often no support for that. But you do have location-based triggers, for example as you're walking towards an object, or clicking a virtual object, or bringing some marker into the view. All of these things would be supported, usually without programming. Some of these tools also support exporting to WebXR, so that you can continue editing the experiences you have prototyped this way in a tool like A-Frame, which we're using in this MOOC specialization.

Here I'm showing Apple Reality Composer. I have a very simple scene with one object, a cube, in there that I can author digitally: I can rotate and scale both the whole scene and the object. Rotation, basically, I have access to all the transform tools here, scaling as well. This is just digital authoring, but what I actually want to do is immersive authoring, so I'm going to go into AR. It's scanning the environment very quickly; that places the object in the world. Now I can look at it and author it immersively. Here I'm basically going to change the way it looks, maybe adding some interesting reflections through the material properties.
With that done quickly, you can see a little bit better how this might look. Or we can go for this car paint look, or aluminum; we can basically play around until we're happy with it. There's not too much we can do with this simple 3D object, and there are other ways of bringing in other 3D models, but I just wanted to show very quickly how we can do immersive authoring on an iPad in AR as well. This was a quick overview of the first set of tools in digital prototyping, specifically digital authoring tools and immersive authoring tools. In another video, I look in more detail into A-Frame and Unity as some of the other tools you may be using for digital prototyping.
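As a small preview of that, here is a hedged sketch of how one of those no-programming triggers from the AR authoring tools, walking up to an object, could be re-created once a prototype moves into A-Frame. The `proximity-trigger` component name and its behavior are made up for illustration:

```html
<script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
<script>
  // Hypothetical location-based trigger: react once when the camera
  // moves within a given distance of the entity.
  AFRAME.registerComponent('proximity-trigger', {
    schema: { distance: { default: 1.5 } },
    init: function () {
      this.camPos = new THREE.Vector3();
      this.myPos = new THREE.Vector3();
      this.wasWithin = false;
    },
    tick: function () {
      const cam = this.el.sceneEl.camera;
      if (!cam) { return; }  // Camera may not be ready on the first ticks.
      cam.el.object3D.getWorldPosition(this.camPos);
      this.el.object3D.getWorldPosition(this.myPos);
      const within = this.camPos.distanceTo(this.myPos) < this.data.distance;
      if (within && !this.wasWithin) {
        // React to the trigger; here we simply change the color.
        this.el.setAttribute('color', 'green');
      }
      this.wasWithin = within;
    }
  });
</script>
<a-scene>
  <a-box position="0 1.2 -4" color="orange" proximity-trigger="distance: 2"></a-box>
  <a-camera></a-camera>
</a-scene>
```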