Hello everyone. Hi, this is George Giaglis. Welcome to week 9 of the University of Nicosia's free MOOC on NFTs and the Metaverse. Today's topic is trends in visualization technology. We are in the second week of addressing metaverse-related issues, and just as we did at the beginning of the course, where, before introducing NFTs, we discussed Ethereum and the underlying technologies providing a foundational infrastructure for non-fungible tokens, we are going to do the same with the metaverse. So today's lecture is going to be more technical in nature. We are going to be discussing recent trends in visualization, we are going to look at augmented and virtual reality, and we will try to capture how these things fit into the metaverse vision. And because this is a very specialized topic, I am honored to be joined by two colleagues who are experts in the space and will be covering the majority of the presentation today. So without further ado, let me introduce you to the first speaker, who is none other than Chris Christou, a colleague of mine at the University of Nicosia, associate professor, and head of our VR lab. So Chris, the floor is yours.

Thank you very much, George. Welcome, everyone, to the first part of this session. I am going to cover 3D rendering, visualization and computer graphics in this part, and then my colleague George will talk about their uses in virtual and augmented reality. Visualization comes in all forms. It is pervasive throughout our lives. It is used to render simulations of architecture, of chemical reactions, crowd simulations, fluid dynamics. So computer graphics is pretty much everywhere. The origins of visualization come from, I guess, cave drawings, but more recently from architecture. If somebody wanted to, for example, create a building, they would go to an architect, who would create some drawings for them. These drawings would be orthographic in nature, to preserve parallel lines, to preserve the shape, in order for the building to be constructed correctly. And if they got it wrong, they would have to go back to the client, go back to the drawing board as it were. So this was a long drawn-out process. It has been replaced by computer-aided design, or CAD. Everything is three-dimensional now. We can walk through a model. We can fly through a model of a building or a city long before it is even created. We can also simulate the lighting that is available in the building at a particular time of day, at a particular location. So things have changed an awful lot. Looking forward, we imagine that developments in haptics, along with auditory representation or even olfaction, will mean that we don't have just visualization; we will have perceptualization sometime in the future.

These are the enabling technologies that have helped us along. Primarily the hardware: the graphical processing unit, the GPU, that is in every device that everyone has in their pocket, in their mobile phones or in their computers. They can render millions upon millions of polygons per second, and this is what made virtual reality and augmented reality possible. We have high-resolution displays, and this includes the organic LEDs that we have in our VR devices. Computer vision, AI, machine learning and deep learning are all contributing now to developments in 3D graphics. Computer vision is the field where you study how to find structure in the world, whereas graphics is the process of rendering that structure, and the two therefore form a happy collaboration.
And then finally we have LiDAR and structure from motion. These are techniques for finding structure, for representing our real world and putting it into our computer model. I am going to talk about the history of graphics first of all and explain some of the processes that go into rendering computer graphics, to give the viewers an idea of what computer graphics is. And then I will end with a few examples of very recent work.

So, a brief history of CGI, computer-generated imagery. It was very much influenced by Edwin Catmull, Pat Hanrahan and Jim Blinn. Edwin Catmull was one of the co-founders of Pixar, which went on to create the short animation "Luxo Jr", which is available on YouTube even to this day. And this resulted in computer graphics being used throughout the movie industry and throughout entertainment. These people were also instrumental in the development of the GPU. And as I just mentioned, this is what has made everything possible: high-resolution displays on mobile devices, computer games on mobile devices and really great games on our PCs.

So behind any graphics is the graphics rendering pipeline. On the left, on the one side, you have your application. This is your computer game; this is your VR simulation, let's say. And you want to get the graphics from that app to the screen on the right-hand side. Okay, so somewhere in there you've got the geometry, you've got whatever it is that's moving, the zombies that are chasing you, and you want to project that onto the screen. That involves various stages of occlusion detection, seeing what is visible from the screen, working out the colors, etc. The rasterization process is the process of actually drawing something onto the screen, and most of this is done in scanline order. When we talk about scanline, we mean that the pixels of the screen are broken up into a rectangular grid. We usually start at the top left-hand side, we work to the right-hand side and we do a zigzag all the way down to the bottom. This is how we get a 2D image.

When we think of the graphics process itself, there is a virtual camera and there is our geometry, and whatever the virtual camera sees is what we are projecting onto the screen. If we're talking about virtual reality, let's say an immersive headset, then this virtual camera is essentially controlled, or moved, by your head. So when you move your head, the virtual camera moves in the virtual environment.

A lot of you have heard of ray tracing. This is important for a little bit later in what I have to say, so I'll mention it here. Ray tracing, quite simply, is tracing rays from the eye through each of the pixels on our screen. If these rays don't hit anything, if they don't intersect with any object in the scene, then we just paint the pixel black. If the ray goes through a pixel and it hits an object, in this case at the point X, then we have to calculate what color to paint the pixel. Now this color depends on the light. It's a simple function of the surface orientation, the surface normal as we call it at that point, and the angle that it makes with the light source. And this is pretty intuitive: if the surface is pointing towards the light, then it receives more energy and appears brighter; if it's pointing away from the light source, of course, it receives no illumination and it will be dark. So that's a very simplistic illumination model.
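Chris's description of ray tracing and the simple orientation-dependent shading rule can be made concrete with a small sketch. This is not code from the lecture; it is a minimal illustration, with invented scene values, of casting one ray from the eye, testing it against a single sphere, and shading the hit point with the Lambertian rule he describes: brightness proportional to the cosine of the angle between the surface normal and the direction to the light.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def intersect_sphere(origin, direction, center, radius):
    """Return distance along the ray to the nearest sphere hit, or None on a miss."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                              # ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade_pixel(pixel_dir, eye, sphere_center, sphere_radius, light_pos):
    """Trace one ray; paint black on a miss, a Lambertian grey level on a hit."""
    t = intersect_sphere(eye, pixel_dir, sphere_center, sphere_radius)
    if t is None:
        return 0.0                               # background: paint the pixel black
    hit = eye + t * pixel_dir                    # the point X where the ray hits
    normal = normalize(hit - sphere_center)      # surface normal at X
    to_light = normalize(light_pos - hit)        # direction towards the light
    # Diffuse term: cosine of the angle between the normal and the light direction.
    return max(np.dot(normal, to_light), 0.0)

# One example ray through the middle of the screen (illustrative values only).
eye = np.array([0.0, 0.0, 0.0])
ray = normalize(np.array([0.0, 0.0, -1.0]))
print(shade_pixel(ray, eye, np.array([0.0, 0.0, -5.0]), 1.0,
                  np.array([5.0, 5.0, 0.0])))
```

Repeating this for every pixel of the screen grid gives a rendered image; real renderers add many more terms, as the next part of the lecture explains.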
So these illumination models made up the core of computer graphics research in the early stages of the field. Researchers were busy coming up with models of how best to represent the various shading effects that we have in the real world. One of the earliest models is the Phong model, and this can be explained by the diagram at the bottom left here. We can break up the illumination of any object, in this case this funny-looking shape, into three components. The first one is the ambient component, which is light from everywhere. It adds nothing to the structure, nothing to the shading; it just ensures that the whole object is illuminated even where it's not facing the light source. The next component is the diffuse component. As we saw in the previous slide, this is orientation dependent and it adds the shading that you can see here. And the final component is the specularity, the shiny highlights that you get on glass and shiny surfaces. There's a nice representation on the right-hand side, where you can see that the process is not as straightforward as I have just described, as you might imagine. We have refraction, we have reflection, we have different types of reflection, we have diffuse reflection, we have specular reflection. So coming up with an illumination model that actually captures all of this is hard, but the benefit is that you get towards our aim, which is photorealistic graphics, photorealism.

Another complication is the fact that in the real world we have indirect illumination, and this is nicely portrayed here. In the image on the left we have a scene with no indirect illumination; on the right we have a scene rendered with global illumination. Let me describe what's going on. The shading patterns across this image are a function not just of the direct light sources, for example this one from the window here. There is also light bouncing off the floor onto the ceiling and bouncing back again. All of this light bouncing around in the environment is causing the smooth shading that you can see, the illumination of the ceiling essentially, which has no direct light shining on it. So things are not as simple as we would hope in the real world. More about that later.

Let me describe now content generation, which is the stuff, the geometry, that is actually in our computer game or in our television commercial. If it's 3D, it's going to have been made in some 3D editor. This is the interface of 3D Studio Max. The first thing to note is that everything is polygonized. Everything consists of polygons. These are flat, simple surfaces; we join them together, not one by one, but we join them all together to make curved surfaces. On the right you may be able to make out that there are basic primitive shapes, boxes for example and spheres, and these are used to create more complicated objects. You may also note the teapot, and this is the Utah Teapot. I've put a link there. It's a very special teapot that has been used for computer graphics research for the last 40 or so years. Characters, avatars, character modeling: there's no difference here. They still consist of polygons.
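Since everything in the editor "consists of polygons", here is a minimal, purely illustrative example (not from the slides) of what a polygon mesh looks like as data: a list of vertex positions plus a list of triangles indexing into it, together with the per-face normal that the shading models described above rely on. The geometry is an invented tetrahedron.

```python
import numpy as np

# A minimal triangle mesh: 3D vertex positions and triangles given as
# three indices each into the vertex list. Real character or building
# models are the same idea, just with many more vertices and faces.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
triangles = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
])  # the four faces of a small tetrahedron (illustrative geometry)

def face_normal(tri):
    """Unit normal of one triangle; this is the surface normal the shading uses."""
    a, b, c = vertices[tri]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

for tri in triangles:
    print(tri, face_normal(tri))
```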
The special thing about 3D characters or avatars is that they have a biped rig, or biped skeleton, which is the thing that actually drives the animation. So if we're talking about animated games, for example, somebody has to create the animations and piece them together. This can be done with keyframing, or it can be done with motion capture, where a real actor performs the motions and these motions are then used to move the virtual character. At the bottom, just a brief mention of crowd simulation. This is something from my own work: we're measuring how annoyed people get when they're surrounded by crowds. But you can also use it for escape route planning, for example, to simulate what happens when there is a fire in a building. This is a multi-character scenario where we've got many non-player characters in the scene. Again, these are no different from the character models I mentioned previously, probably just at a lower resolution.

We're looking at current trends now in the last few slides. LiDAR is used throughout for measuring distance, for capturing structure, spatial structure. The principle here is the same as an echo: it takes a while for projected light to bounce back from surfaces, so we can measure the time it takes for the light to come back to the emitter. It is now available on consumer devices like the iPad Pro. From this we get a pixelated version of the scene in front of us, a point map. The point map can be turned into a depth map, which is just an encoding of how far objects are away from us, in relative depth. In turn, this depth map can be turned into structure, into a 3D model. This is used throughout the modern tech that you will hear about later on. All of these devices, for example the Meta Quest, use this to work out where they are in the room, and augmented reality glasses use it to work out the surfaces on which to project their graphics.

This is some exciting work done by Meta, formerly known as Facebook. Here they are actually capturing the structure of human beings, of people. People would go in here and have their head or their body scanned. This is a multi-camera rig and a multi-light rig, just to ensure that there are no shadows. It is used to extract the structure of somebody's face, in this case, and also the textures of the face. And then deep learning can be used to reconstruct the expressions that the user is making, as detected by a device such as the Quest, for example, or a future Quest headset. So if somebody is grimacing, then their avatar will be grimacing in the metaverse.

Now, about light fields: if you have a Steam account and an HTC Vive or Meta Quest, it is worth downloading Google's "Welcome to Light Fields". Previously I was talking about getting the structure of the person; now you're getting the structure of the environment. And this is a wonderful demonstration of how realistic graphics can be. What Google have done here is they've mounted a number of GoPro cameras onto this rotating rig and they're basically sampling the amount of light in the room. As the rig rotates around, they are sampling the light structure of that room, the so-called light field. Okay, and if you store this you can play it back to somebody.
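Returning for a moment to the LiDAR principle mentioned above: the sensor times a light pulse's round trip, so distance is d = c·t/2, and a depth map can be back-projected into a 3D point map with a simple pinhole camera model. The snippet below is a generic sketch with assumed camera intrinsics (fx, fy, cx, cy), not the actual processing pipeline of the iPad Pro or the Quest.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0   # metres per second

def tof_to_distance(round_trip_seconds):
    """Time-of-flight: the pulse travels to the surface and back, so halve it."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Back-project an H x W depth map (metres) into 3D points using an
    assumed pinhole camera with focal lengths fx, fy and principal
    point (cx, cy)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)   # H x W x 3 point map

# Example: a pulse returning after about 33 nanoseconds is roughly 5 metres away.
print(round(tof_to_distance(33e-9), 2))

# Example: a tiny 2x2 depth map with illustrative intrinsics.
depth = np.array([[2.0, 2.0], [2.5, 2.5]])
print(depth_map_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5))
```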
And then the feeling you get from this light field playback is not just one of realism; it is also a way to capture the specular parts, the specular components, of the surfaces in your scene: shiny surfaces, etc. On light fields in particular, I won't say too much as we need to progress. But if you want to capture the full extent of light in a scene, you really need to use a multi-camera grid, as I'm showing in this middle diagram. You can also use a plenoptic camera. In the Google case, they used the multiple GoPros which were rotating, but if you want a forward-facing camera, you can use a plenoptic camera, which, as you can see, takes many small images of the same scene, and you can put these together in order to get motion parallax, in order to get the shine from a glass, for example, in order to represent the world more realistically.

Now, if you have only a limited number of these samples, you can use a neural network to calculate, or to represent, the space in between. This is the principle of NeRFs. And this is a very recent paper by Benjamin Attal from this year. It demonstrates, first of all, the power of occlusion effects in depth perception, the power of motion parallax, the realism of the specularities, and just the overall structure. So these are novel views generated from just a limited number of sample images. This is the basic principle of NeRFs: you take a small set of images of an object and you create a volumetric representation using a neural network. This is another demonstration of this from Nvidia's website; I've put a link to it down below. So again, you have a limited number of views, and the fly-through is generated by the neural network.

The thing that you might be asking at this stage, especially seeing the image on the right-hand side, is: "Can we get an actual model out of this? Can we get a computer model out of this three-dimensional neural network representation?" And the answer is yes, and people are actually working on this. Again, this is very recent work by Munkberg, taking multi-view images, representing these within a neural network, and outputting a mesh, a three-dimensional mesh of the object, as well as the textures used to paint the detail onto the surface, and also the light probes. The light probes capture the specular components, and all of this can be output straight into your favorite game engine, Unity for example, or Unreal, or into a 3D editor where it can be further edited.

Okay, so that brings me to the end of what I wanted to introduce you to in my segment. We'll pass over to my colleague, George, and he will take over.

Thank you very much, Chris. That's some fascinating stuff. So for those of you, the students I mean, who are interested in this and in what George Koutitas is going to present in a while, let me tell you that we're working towards creating a follow-up course that will focus on the upcoming developments in terms of hardware and goggles and masks and headsets and all that stuff. There are some truly fascinating developments happening at some big companies and some startups. So we will revisit the space, post Christmas, and have a special one-day course for those who are interested in seeing how our world will change. Chris will stay with us until the end and he will be available for questions if you have any. But before we go to the Q&A, let me introduce our other speaker for the day, my colleague and friend, George Koutitas.
George is an executive, entrepreneur and academic with more than a decade of experience in business and R&D. He has a multicultural background: he has spent six years in Austin, Texas, five years in the UK, and another six years in Greece, where he is currently based. He has founded a startup company in Austin working on AR and VR training of first responders and has a number of publications and a patent in the AR and VR space. So it is a great pleasure for me to introduce him to the course. George, are you with us?

Yes, hello everyone. Thank you, George, for the warm welcome. Hello, everyone, can you hear me? I hope you can. Just give me an indication that you can hear me so I can continue. Okay, all right. So I'm very happy to be here with you today and to introduce you to the concept of Extended Reality. Some of the things that we're going to discuss today you might already be aware of, but some of them might be new to you and might help you expand your horizons.

So we all hear about VR, AR and Mixed Reality (MR). Let's understand the difference. Virtual Reality (VR) is a simulated experience in a fully virtual world, and it is available to you through 3D near-eye displays. So you are fully isolated from the physical environment. You are in a totally virtual environment, with graphics presented to you from a display in front of your eyes. On the other hand, Augmented Reality (AR) allows you to interact with the physical world, and it overlays digital information and content on top of the physical world. Mixed Reality, and sometimes we use the terms augmented and mixed reality interchangeably, is the ability of the digital content to interact with the physical environment. This means that, if you look at the third circle, the 3D graphic is behind the sofa and it is in the shadow region of the sofa. The sofa is a physical object in my living room and the robot is digital content, and I can only partially see it. This is called Mixed Reality. An example of Augmented Reality was Google Glass, or even our smartphones, on which we can have AR applications. Mixed Reality refers to more modern applications that are usually made with Microsoft HoloLens and other AR devices. From now on, just so we don't get confused, AR and MR can be thought of as almost the same.

In order to experience AR and VR, we need to have a head-mounted device, and as you already know, there is a plethora of devices on the market. The breakthrough in the HMD, the head-mounted device, came from Palmer Luckey with a Kickstarter project. This was in 2012, I think, but it started Oculus, and there was excitement there and then an acceleration of the technology, because many developers used the development kit, DK1, offered by Oculus. So they were able to create applications in the VR space, and people could access them through a marketplace. In the image here on the left, you can see some VR headsets. One is the Oculus Rift; you can see a cable, because it was required to be connected to a computer for processing power. You can see the cardboard into which you put your smartphone so that it acts as the VR display, and you can also see the latest versions of the Meta Oculus Quest. On the right side, you can see a couple of examples of AR headsets: Google Glass, Microsoft HoloLens and Magic Leap. You can see some of the content here; I'm not going to read it all out to you, obviously, as it is explained in the images.
Feel free to use the slides and, you know, dive a little bit deeper into the terms. So we have AR and VR experiences delivered to us through head-mounted devices. What are the applications? There are numerous applications that we can experience. Both AR and VR have struggled a little to find their key application areas. We have seen VR going very deep into the gaming space, but other application areas may involve learning and development, remote collaboration, social networks, or even industrial applications. AR is the same. But we will see, as time passes, that VR leans more towards gaming, remote collaboration and social networks, whereas AR is used mainly for industrial, manufacturing and construction applications, or learning and development, because it allows us to interact with the physical environment. This is not 100% true; obviously, we also have VR applications in learning and development or industrial applications. But we have seen this separation in the application areas, and obviously the reason is that AR allows you to interact with the physical world.

Gaming is huge: by 2024, it's going to be 2.5 billion. Remote collaboration: we have companies like Spatial.io. Agriculture: there is a recent trend in the integration of AR with the Internet of Things. So we already have augmented reality startup companies that help farmers optimize the quality of the growth of their fields, by either deploying sensor networks and taking measurements of the humidity, etc., or by using smart cameras that allow them to optimize where they need to put more water, and so on. So there are very, very exciting applications. Learning and development: as we will see a little bit later, VR and AR have a very important advantage compared to traditional, let's say, web training programs. They improve cognitive learning, but also muscle memory, because you're moving your hands, you can move in the environment, and the brain can remember where items are positioned and what actions you need to take if it is related to repetitive work. So, very fascinating. Obviously, health: we have a lot of applications in the health sector, both in training and during operations. Manufacturing and industrial: you don't need to be an expert in order to do a repair. You can download the instructions and follow them at the same time that you are actually doing the repair of a piece of machinery. We are not very far away from what we have seen in the movies, where you can download something, not to your brain, but to your AR device, and execute it without being an expert in the field, similar to The Matrix. Architecture and construction: obviously there are numerous applications there. So the field is fascinating, and AR and VR will definitely be dominant in our lives and our work, now and in the future.

It's really interesting to see how this technology evolved. The first HMD, head-mounted device, dates back to 1943. Yes, believe it or not, it's that old. You can see that there was a big gap; obviously the technology was not there and user adoption was not there. Then suddenly, from 1960 to 1969, there was a decade in which people, with the growth of computers, started experimenting with different types of technologies in order to create immersive environments. The most exciting milestone was in 1962, with an immersive experience called Sensorama. If you watch the video now, it looks funny, but for 1962 it was a breakthrough.
You will see that there are waves, bursts let's say, of evolution, and now we are at the time when the technology, the hardware, is there. We have portable devices with very good quality of experience and quality of graphics. The time is now for the sector to grow exponentially. In the VR space, there are a lot of companies that provide head-mounted devices. Obviously, one of the most well known is Meta Oculus. Meta bought the company some years ago, and they focused initially on the gaming aspect with more of a B2C, business-to-consumer, approach. They addressed the consumer market, and that was the first, let's say, exponential adoption of the device. Obviously, there are other companies out there like Google, HTC (Vive), Samsung, etc. Remember, in the VR space it all started with Sensorama. I highly recommend watching that video to understand how, 60 years ago, people created the first immersive experience.

If we focus on one product family, the most famous, let's say Oculus Quest, you will see that it started with a passive VR experience without any type of controllers, so it was more like visualization. Then we had the Oculus Rift, which was connected to a PC in order to provide the required processing power. Then we had the Go and the Oculus Quest, which work on battery and in a standalone manner, so you don't need to connect them to a computer. Then we had the Meta Quest Pro, which was recently announced and which introduces very cool features like mixed reality. You can see that on the front part of the display there are cameras that allow you to perform gestures, so you can use your actual hands; you don't need the joystick. The level of experience and the graphics have dramatically improved compared to previous versions. This evolution is seen across all companies.

In the AR space, there are also a lot of companies that provide devices. The most famous are Microsoft HoloLens and Magic Leap. Meta is still present in the AR space with what they call Spark AR. It's a platform on which anyone can create AR experiences that are used on a mobile device. They don't have an AR headset yet, at least not available on the market. Magic Leap is an important company to watch, because there was an initial hype back in 2014. I think they raised a lot of money at a huge valuation. The company was not ready to provide the product and didn't address the right niche market to penetrate, and that's why the startup went into more of an idle mode. But recently we see a lot of motion and evolution coming from Magic Leap, since they refocused their business model on enterprise AR and use cases related to health. So we expect to see a lot of growth and a lot of cool new features from Magic Leap 2.

Obviously, a recent trend is coming from the Metaverse. So imagine: we have AR and VR companies, we have computer graphics companies, and now we have companies in the Metaverse space, either creating 3D environments or creating serious games and interactive environments, like Roblox Corporation, Decentraland, etc. It's going to be fascinating to see what type of collaborations, acquisitions or mergers are going to happen between the AR, VR and Metaverse spaces. So I'm sure that in the coming years we're going to see a lot of action in this space. But let's see what is happening inside a headset. What is inside? What type of electronics do they have?
Obviously, these bullet points do not represent the entire technology, but they can give you a good high-level overview of what exists and what the main components are. This is the device from Meta, the Meta Quest Pro. There are front cameras and depth cameras in order to understand proximity and do gesture tracking. So you can put your hands in front of the cameras, and by moving your fingers you can see your virtual hands moving with great accuracy. There are also eye-tracking sensors, which are important especially when you are in a social interaction with another person and the other person can see your eyes, or for optimizing the graphics and the frame rate according to where you focus your eyes. There are devices called IMUs, Inertial Measurement Units, which measure acceleration, angular rate and orientation and include accelerometers, gyroscopes and magnetometers. They are used to accurately measure the position of your hands or the rotation of your head. There are Time-of-Flight sensors in order to measure distance: imagine you are entering a room, and this physical room can automatically become a virtual room in your virtual reality experience. You need depth cameras and Time-of-Flight sensors to do that. There are processors, speakers, a battery (obviously) and controllers.

The controllers for VR are quite interesting to observe, because if you think of the user experience before VR, you had the controllers of game consoles: you used both your hands on one device. But now, in virtual reality, you can actually physically move, so you cannot have one controller for both of your hands. So the UX teams of every company out there were responsible for converting the controllers that we had on gaming consoles into two separate controllers, with additional sensors on top of them, accelerometers and gyroscopes, in order to track the movement of our hands and provide the required user experience for us to interact and play our virtual reality games. And this graphic shows how two different companies created two different controllers starting from the concept of the console joystick.

But when you don't have a controller, you need gestures in order for the device to understand where your hands are and what the motions of your fingers are. And this is achieved, both in AR devices and in VR devices, with the cameras on the front of the headset. These cameras are powered by algorithms that understand the motion of the fingers, and according to the different types of motion that you make, you can interact with the virtual environment. So, for example, when you make this movement on a Microsoft HoloLens, the movement called Bloom, the main menu appears. If you want to click, you need to do this with your finger, not this; this is the gesture. If you want to drag and drop something, you click it and you drop it. So there are different types of gestures that allow you to interact with the virtual environment on AR or VR equipment.
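As a rough illustration of how the IMU readings George lists are typically combined to track head rotation, the sketch below blends an integrated gyroscope signal (smooth but drifting) with the accelerometer's gravity direction (noisy but drift-free) using a simple complementary filter. The function, parameter values and sample data are all assumptions for illustration, not the tracking algorithm of any particular headset.

```python
import numpy as np

def estimate_pitch_roll(gyro_rates, accel_samples, dt=0.01, alpha=0.98):
    """Very simplified head-orientation tracking from IMU data.

    gyro_rates    : sequence of (pitch_rate, roll_rate) in rad/s
    accel_samples : sequence of (ax, ay, az) accelerometer readings in g
    The gyroscope is integrated for smooth short-term motion; the
    accelerometer's gravity direction corrects the slow drift."""
    pitch, roll = 0.0, 0.0
    for (gp, gr), (ax, ay, az) in zip(gyro_rates, accel_samples):
        # Orientation implied by gravity alone (noisy but drift-free).
        accel_pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
        accel_roll = np.arctan2(ay, az)
        # Blend: mostly the integrated gyro, a little accelerometer correction.
        pitch = alpha * (pitch + gp * dt) + (1 - alpha) * accel_pitch
        roll = alpha * (roll + gr * dt) + (1 - alpha) * accel_roll
    return pitch, roll

# A short made-up recording: the head tilts slowly forward for one second.
gyro = [(0.1, 0.0)] * 100                 # 0.1 rad/s pitch rate
accel = [(-0.1, 0.0, 0.99)] * 100         # gravity mostly along z, slight tilt
print(estimate_pitch_roll(gyro, accel))
```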
What types of delivery mechanisms and technologies do we have in order to experience AR and VR? There are numerous; let's go through each one of them. WebVR is a virtual reality experience, but it is deployed in the browser of your laptop or your computer or wherever you want. Obviously you don't have all the nice features of VR; it feels like you are playing a 3D game. But it might be the right solution depending on the program and the application area. So, for example, if you want to create a training program for students or people to get familiar with a space, WebVR might be the right place to deploy your experience, because it is already available everywhere; everybody has a browser. It's very cheap; you don't need to buy any new equipment. On the other hand, if you need to create a more immersive environment, like a game or more immersive training, then you need a full VR experience and you deploy it on a VR headset. Obviously WebVR is cheap, and the VR cardboard is also cheap, because the device, the cardboard, is almost free, and you only need a smartphone. In the AR space, you can deploy your AR application on a smartphone. I'm sure that you have all played Pokemon Go, or that you are playing AR games on your smartphone now. AR applications can also be deployed on smart glasses, for example Google Glass; I'm not sure which of you experienced that in the past, but I tried them back in 2015. We now have car manufacturers putting smart glasses in front of the wheel of the car in order to inform the driver about, you know, navigation or specific alerts, and obviously we have AR headsets on which you can deploy your AR applications, like HoloLens, Magic Leap, etc. So according to the application, the level of immersion and the use case, you have a plethora of delivery mechanisms and delivery technologies for your AR and VR experiences.

Something else that is interesting in the AR and VR space is haptics. In order to make the experience even more immersive, we now have gloves with sensory devices that improve the overall experience, so vibration on your fingers, or on a suit that you are wearing. So imagine that you are playing, let's say, a game in which you give a punch to the enemy and you can feel the punch on your chest, or you are in a forest and you can see a bird flying, and the bird lands on your finger and you can feel it. All this extra level of immersion is delivered to you through extra hardware equipment that is obviously communicating, in a tactile manner, really fast with the VR headset, and you need to have that extra hardware to experience it.

There are other ways to increase the level of immersion and have a better VR experience; this is one example. One of the most famous is the treadmill, which allows you to run in VR. This addresses one of the main drawbacks of VR compared to AR: in AR you can move your hands but also your body, whereas in virtual reality you cannot move your body; you only have the joystick in order to navigate in the environment. With VR treadmills, you are on top of a treadmill, you can run, you can do all the physical movements, and these are translated into locomotion in the VR space. We also have flying simulators, theme parks, etc., in order to increase the level of immersion.

I'm not going to spend too much time on two of the most, let's say, commonly used engines for creating AR and VR experiences. I'm sure that you are all familiar with Unity and Unreal; both of them are engines that allow you to create VR and AR experiences. As a very general rule of thumb, Unreal Engine is most widely used in games and has very good graphics, whereas Unity has a lot of libraries that can help you if you want to create more like trainings and other types of VR experiences. Obviously this is not a hard rule, but it is quite commonly seen out there.
So if you are a startup and you want to create, let's say, not a high-resolution-graphics VR experience but something more related to training and learning and development, Unity might be the right tool, because there are a lot of libraries and resources out there. If you want to create a very realistic game, then probably Unreal Engine might be the right platform for you. But obviously it depends on, you know, the use case and the application.

So now let's move to some of the development challenges that we face nowadays. How do you develop a VR experience? Most probably you are aware of the agile development process, so let me give you some of the lessons I personally learned in my startup career. Creating a VR or AR training or game, let's say an experience, is time consuming and quite a difficult thing to do. This is because there is a plethora of platforms, there is a plethora of devices you can use, there is a plethora of different types of 3D objects and environments that you can create, and in most of the cases that I have seen, you don't really know what the customer, the user, wants. So one of the most common approaches to help you in your development of the experience is what we call the agile development process, which helps you understand what the user needs and what the challenges are, explore the different alternatives you have, experiment, and then materialize. And this is done through iterative cycles with small cross-functional teams. So instead of going and creating a monolithic game or experience that nobody is going to use, try to make it adaptive and iterative.

For example, we wanted to create a virtual reality training for first responders, and this training should be delivered both in virtual reality, on the Oculus Quest, and as an AR experience using the Microsoft HoloLens. So we needed to develop two products, but we didn't even know what the user and the customer wanted. So, following design thinking principles, what we did is we created an MVP with 360 images or 360 videos, and we used InstaVR as a platform to let users experience it. It is very easy to do, and to tell you the truth, the budget that you need is less than 500 dollars, let's say, or euros, or even zero: you go to the place, you take 360 images, and then you program an InstaVR experience. You give it to the users and you receive feedback: I would like this feature, I don't like that, I would like to add another feature. So with this iterative process, you know, we started progressively creating experiences in VR and AR, we published them on a VR store, and then we were able to scale to a large number of users. I definitely want to give you this advice: don't go and develop something big. Focus on an MVP (MVP stands for Minimum Viable Product) and follow agile principles, iterative work, in order to, you know, improve your model and your experience step by step.

If you want to see all the development stages of an AR and VR experience, the most basic steps are the following: create the 3D environment; design and create the instructional design, let's say the serious game and the experience behind it; create some special effects and immersion levels, you know, some special gestures; define what analytics you need to keep track of in order to understand user engagement; package all of this into an application file; and publish it on a marketplace.
At every step there are a lot of questions that you need to answer. These are just a tiny portion of the actual questions that exist out there, but they give you, let's say, an indication of the steps involved and the main obstacles that you need to get past. In reality, it's 10x what you see here.

Another development challenge is the avatar: who owns my avatar, and what type of diversity do we need to give to people? It needs to be customizable. I want to have my face on the avatar; some other people want to be anonymized or wear sunglasses. So creating an avatar is not a simple thing in modern AR and VR experiences, and it is something where we are going to see a lot of innovation in the near future.

Another challenge that we meet, mainly in virtual reality, is what we call motion sickness. It is an important drawback because, and I personally experience it sometimes, it doesn't let you experience the entire virtual reality game. After five or ten minutes you might feel motion sickness and you might quit, abandon the game. It's quite interesting to see how motion sickness is created. We have two sensors that detect motion in our body: one is our ear and the other is our eye. Inside our ear there are some tiny sensors that sense motion; think of it like an accelerometer inside our ear. And obviously the eye detects motion visually. When we experience VR, what happens is that the brain, which is connected to our ear and our eye, receives two signals that are opposite. The ear does not feel any type of motion, so it sends a no-motion signal to the brain, whereas the eye can see motion, because I can see motion in the virtual reality environment: cars are passing by, I'm flying a plane. The brain does not know which of these two sensors to trust more, because it has equal trust in both of them: it trusts the ear, it trusts the eye. So in order to defend itself, the brain sends a sickness signal to our stomach, and this forces us to stop whatever we are doing that creates the motion sickness. A recent trend in head-mounted devices, in VR headsets, is that they are going to include a magnetic sensor, actually an actuator, on the ear side, in order to synchronize the motion that the eye detects with the ear, so that the ear also senses a matching motion. So motion sickness is something that is not going to happen in many of the new VR headsets.

Another cool challenge is what we call teleportation. It's not actual teleportation, but it's very similar to what we have seen in the Star Wars movies. The idea of teleportation is for me to be able to see a 3D, full-scale avatar of the person I'm communicating with. So imagine that I'm in my room and you are in your room, and you can see my 3D body walking inside your room and delivering you this lecture. There are different types of technologies to do that, either by transferring a large number of pixels into this 3D environment or by creating a 3D object and putting a skin of how I look on top of it. Obviously there are different types of cameras and hardware equipment that need to be created. I'm not an expert on that, but I definitely know that there are a lot of development challenges in that teleportation space.

And before I close, another challenge is how we interact with all these huge networks of internet of things devices that are out there.
Imagine that by 2025, or it might already be happening, non-human-centric data, data coming from the internet of things, is going to be larger than human-centric data, data that real humans create. And one of the key problems that we face now is: how can I interact with all this big data? We can have a dashboard on a tablet or a smartphone, but it's too small. We have NLP, natural language processing, algorithms, so I can speak to a smart device and have access to this big data, or I can interact with smart devices like a thermostat and see the data. But one of the most expected breakthroughs is going to come through the use of AR and VR. I'm going to be able to visualize big data on the physical world by connecting AR applications with internet of things networks. So access to data is going to be an immersive experience for us instead of, let's say, a flat screen in front of us. That's all on my side. Obviously there is a list of conclusions that you can see in your slides, and George, we can welcome questions. I hope you found the lecture interesting. Thank you very much.

Thank you very much, George; thank you very much, Chris. This was a really packed session, and at least for me, because I watched it more as a student, since I'm not an expert in these things, I found it very fascinating. Just to let everyone know, this is quite a long presentation, you might have noticed it's more than 70 slides, so we're going to mint it and have it available for you to claim as an NFT as soon as possible, and obviously both Chris and George will be available for questions offline as well, on Viber or Twitter. So we have a couple of minutes; I think we can take a couple of questions.

One question is, okay, people are naturally confused by acronyms. So George, you started by trying to explain the differences between AR, VR and MR. A student is asking about XR, which is Extended Reality. I guess I know the answer to that question, but can you clarify how XR fits with the other acronyms and what everything is?

Yeah, acronyms and abbreviations are always a big issue, and sometimes there is an overlap: extended reality, mixed reality, AR and VR. I think that we are going to end up with more dominant, let's say, names, with VR covering everything that has no interaction with the physical world, so I'm totally isolated in a virtual experience, and then XR, I think, in my personal opinion, will include all the rest. But this is something where, you know, we will probably see different names coming in the near future. So me personally, I use VR and AR; some other people are using XR; it's up to you to use the name that you prefer. Chris, any comments on that? Chris might be able to add something.

Well, I use VR for everything. (laughter) Okay, yeah. I personally like to keep it simple. I think virtual reality is good enough, even if it's going to blend with, you know, but who's to say what real reality is anyway? So keep it simple. Virtual reality is fine. As for XR, I have read papers which say just treat the X as a variable, just a placeholder. So in the X you can put whatever: augmented, you can put the glasses, you can put immersive, and whatever comes next, you know. So I'd rather not confuse people, and I would either just go with VR or go with what George just said: AR and VR are fine, I mean, it's good enough. Yeah, I agree; I mean, the "keep it simple" principle applies here, I think.
I'm probably older than everyone around here, and I've been around since the early days of the internet, the early days of mobile, the early days of crypto, and I've seen how acronyms are used and abused by consultants and vendors as they try to position their products and differentiate themselves from the competition. So sometimes we get, you know, bombarded with different acronyms that mostly mean, if not completely the same, very similar things, and it tends to be confusing. So yeah, I'm all in for simplicity. And, you know, as happened with the internet, for the things that have real value, the names will stick. Others, like, you know, the intranets we were discussing back in the 90s, will just disappear from the foreground.

Okay, another question. Both of you, especially George I think, have mentioned a number of devices that are commercially available, announced, or in the process of being developed, and okay, I guess most of us know about Oculus and devices like that, but you mentioned things like haptic interfaces, or treadmills, or this actuator in the ear that will alleviate the symptoms of motion sickness. Can you give us, either of you, a time horizon of when these things will hit the commercial market, when we will see them? I mean, are they available on the market now? Are we expecting them in 2023, or is it more like a five-year horizon?

Christos, should I go first? Yes, go. Okay, the technology is already here, and obviously there is a supply and demand driver here, so the more the demand grows from the end users, the faster the technology will accelerate. We have seen cases where the technology accelerated very fast but the user adoption was not there, and this, from the business perspective, is sometimes not very sustainable. But for the moment, the technology is here to deliver acceptable levels of immersion and experience, so it can be engaging for the end user. So gloves that can improve, let's say, haptic VR, or treadmills, already exist. They might be hard to find, because there is no mass production and there are no games, you know, out there yet to let you experience, with the use of a haptic glove, the level of immersion that you want. The technology is here, the demand is coming, so we are going to see step-by-step growth. My personal sense is that in 2023 we are going to see much more evolution compared to '22, and more penetration of these types of technologies into our experiences.

Yeah, so I tend to agree, but if you ask me which one of AR or VR is going to hit a use case or a usage scenario quicker, I think it's going to be augmented reality, because, as George mentioned, some people really do not like the sense of isolation that you get from immersive tech. You know, I've been working with the tech for quite a long time and, yeah, you don't find me putting on my headset; I like to kick back and watch a nice flat screen. But imagine this: you've got augmented reality glasses and you kick back and you turn your room into a living cinema. Now this is a use case. It follows from larger and larger TV screens, for example; every Christmas you're buying a bigger TV set. Well, at some point you don't need to buy a TV set; you can have a shared experience with your family wearing a pair of glasses that you can take with you from one room to the next.
There could be a market for this in terms of home entertainment. For business uses, absolutely; you know, everything is there currently, and it will get better. Haptics has fallen out of favor a little bit, because, you know, the technology is too clunky to be viable at the moment. I remember the old haptic device called the Phantom, if anyone would like to go back to the late 1990s, early 2000s. This was a little robot; you put your finger into it and you could feel stuff and play around with, you know, elastic effects, etc. Now we have haptic gloves, but I think this will take a while to take off; I think it's a slow process.

Let me take you a little bit further into the future then, because I have a question that I really like from one of our students, and the question is: what about brain-computer interfaces? How far away is that, do you think? I guess eventually we will tap directly into the visual part of the brain and bypass AR spectacles or goggles. Do you have any views on this?

That's already here. Is it? Okay, yeah, that's already here. In fact, I have one just here. It's a 32-channel BCI with a printable shell, so you get the 3D model, you can print it, and you get a pack from OpenBCI, which is the name of the company. It was a Kickstarter from a few years ago, and the perfect use case is as a motion device. It picks up the skin currents on the head, on the scalp; you can train it on the motor areas of the head with repetitive movements, and then you can associate those movements with movement in a virtual environment. And people have been doing that for a few years now. It is also used for paraplegics, so you have cases where it is used by a paraplegic in a wheelchair to move their wheelchair, and you can for sure take this straight away and put it into a virtual environment.

Fascinating, I didn't know we were so advanced in BCI. Next time I am in your lab, you need to show me this. George, any views on that?

I think that one of the enablers of something really, really interesting is going to be 5G or, you know, 6G networks that will allow real-time 360 video transfer. And I remember a couple of years ago, when I broke my leg, I said I would pay anything if I could click on a person on a map, let's say on the top of a mountain, who is snowboarding with a 360 camera on her head, and I could be with my broken leg on the sofa of my home in Greece, wear my VR headset, and have the same experience in real time, in 360 high-definition video, while the person does a nice downhill run for me while I can't. So I think that when we have content creators, real-time high-definition 360 video able to be transferred, and headsets that allow us to, you know, consume this type of content, there is going to be a really, really interesting application area. We are some years behind, because the network and the speed are not there yet in many cases; in some, you know, dense urban environments they are. But I think this is going to be fascinating, and obviously we need 360 cameras in our smartphones.

Awesome, very, very interesting. One student is asking, what was the name of the company that you mentioned, Chris? I think it was OpenBCI, you said? (You're muted, you're muted, I think.) OpenBCI, BCI for brain-computer interface. Great, yeah, okay.
Another question is: if you were to pick the top difficulty, whether technical, adoption-related, regulatory, or whatever you want, for making these things commercially viable and adopted en masse, what do you think the biggest obstacle or obstacles are at the moment? Is it that we are missing technological elements, is it that we miss applications, that we miss education? What is it that hasn't allowed augmented or virtual reality to reach their full potential?

I think there are different use cases for each of them. Well, I saw a breakdown of 60 to 40 of Meta's expenditure on augmented reality versus virtual reality, and I don't think it's the case that Meta, for example, is just building this closed world. In the end, you know, this closed world is not going to be the use case; it's not going to be what breaks things open for this tech. I think this tech will gradually become pervasive through everything that we do. It's a slow process, and I don't think that there is going to be a massive jump, in my personal opinion. I saw a visualization of this where somebody walks out of their living room and they are bombarded with augmentation, right? So in the streets where they walk there's information: information regarding the street name, there's advertising. Once the advertisers get in there, oh, believe me, things will take off, people. I also saw in our cafeteria today a pair of glasses which, you know, stop the glare from a screen, and I think the wearing of glasses like this, with a form factor like this, with computer graphics augmented onto them, is going to be the clincher. People will start wearing these, they'll feel comfortable wearing them, and everywhere we go there will be data; it will be data rich, and this will be the metaverse, in my opinion.

Very interesting, and a very interesting definition of the metaverse as well. George, any final thoughts on this? Because I think this is our last question for today.

I agree that, you know, one of the key obstacles to wearing the glasses is that right now we put all the processing power on the glasses themselves. What we need is a pair of glasses like the ones we wear for the sun or, you know, to improve our sight. So migrating all the processing power to another device, probably our smartphone or, you know, our smartwatch, while still having an acceptable level of graphics and experience delivered in a normal pair of glasses, will open new horizons in the adoption of these services. And then, you know, wherever you are, you can see augmented information everywhere: advertising, real-time information, navigation; everything can make your life much easier.

Fantastic. We are going to see huge changes in the coming years, and I agree with Chris that they will happen gradually and then suddenly, maybe when advertisers pick up on these and we have a sudden influx of applications all around us, and then we will be chasing the applications instead of them chasing us. Anyway, thank you very much; this was a very fascinating session. Thank you for being here, thank you for sharing your expertise with us, and I'm looking forward to seeing you again in one of our future courses. Thank you very much everyone, we'll see you next week with week 10. Bye!