Transcript Learn@Lunch with Associate Professor John McGhee

Take a VR trip inside your body

12 September 2018

 

Introduction by Ross Harley, Dean, UNSW Art & Design

 

My name's Ross Harley. I'm the Dean of the Faculty of Art and Design at UNSW, and it's my great pleasure to be welcoming all of you here today. And of course, we always begin by acknowledging, especially in a site like this, that this is Gadigal country of the Eora Nations, and we pay our respects to elders and custodians of this land on which we meet, both past, present, and into the future.

So just a couple of quick housekeeping items before we begin. Today's session is being recorded, so this will be a video available to all of those who weren't able to make it today. If you could also just reach into your pocket and make sure that your phone is turned to silent or off, that would be really helpful. The talk today goes for about 40 minutes, and then there's an opportunity for a Q and A, and I encourage you all to save up your questions and to have a conversation with our speaker, who today is Associate Professor John McGhee, whose presentation is going to take you on a virtual reality trip inside your body. John's going to talk about how he uses design and virtual reality to provide new research insights for science and medicine. So please, join with me in welcoming John to the stage.

1:40 Learn@Lunch presentation by Associate Professor John McGhee

I'm just going to turn on this mic. Can everybody hear me at the back? Fantastic. Thank you, Ross.
Yeah, my name's John McGhee, and I'm an academic at UNSW, Art & Design in Paddington. And I am the Director of the 3D visualisation aesthetics lab. And what we do in that lab is we explore ways of visualising data and combining technologies and methodologies that you might see in the video games, film visual effects, and the design industries to achieve outputs that explore the full gamut of how we might represent data and make it meaningful to different types of audiences.

So I'm going to start off with cinema, because we all know what cinema is. We've all had a cinematic experience, and cinema has pretty much defined the 20th century. It's something we've all gone along to. We've gone with friends. We've gone with family. The lights have gone off and the projector's started and we've been transported to very different worlds. We've laughed. We've cried. And we've learned about what it means to be human. But I believe in the 21st century, that experience is going to be disrupted. The cinematic will still exist, but how we consume that experience is going to be quite different. And I believe that virtual reality headsets and head-mounted displays and the way we use VR is going to facilitate that disruption.

Now, how many people have used a virtual reality headset in the audience? Oh, wow. So most people in the audience. That's a good sign. Excellent. So the headset that you can see on my slide is the Oculus Rift, and that's one of the leading headsets in this second wave of VR headsets that's out there. And what it does is it gives you an embodied experience of a 360-degree place that's either fully synthetic or filmed and relayed through a virtual reality headset. You also have controllers that allow you to interact with the environment.

But VR's been around for quite a long time. I'm not going to go back and give you the full history of virtual reality. But I'm going to take you back 27 years ago, almost to the day. This was a video that was produced by a US TV network, and I want you just to listen. I'm going to use some of the things they say to drive a big chunk of my presentation.

4:10 video presentation

Speaker 3: Fly over Mars. Take a truck through a prehistoric jungle. Tour a house that has not yet been built. It's called virtual reality and as Jane Chambers found out, all it takes is a special helmet and a glove and you're off.

Speaker 4: You're gone, John. You're history.

Speaker 3: It's a computer-generated world where you see and move and feel. Will real life ever be the same?

Speaker 5: From ABC News, with anchors Diane Sawyer in New York, Sam Donaldson in Washington, Chief Correspondent Chris Wallace, Judd Rose, Jay Schadler, and Sylvia Chase. This is Prime Time.

Speaker 6: It was more than 100 years ago that Lewis Carroll wrote about Alice's trip through the looking glass. Now that fiction has become a reality. Or you might say a virtual reality, because that's the name of a new computer technology that many believe will revolutionise the way we live. As Jay Schadler explains, virtual reality lets you travel to places you've never been and see things you've never seen and do things you've never done without-

5:27 Learn@Lunch presentation resumes

Yeah, that's good. I think we've probably heard enough. What's interesting about that TV show that was made 27 years ago is in 2018, the press are still saying the same things about VR. Almost word for word. So the question I pose is, what's happened now in this second wave of VR? What's different between now and 27 years ago?

Well there are three major differences and they're all interconnected, and I just want to talk to you about them and I'll then move into some of the project work that we're doing at Art & Design. And the first difference is that graphics technology has moved on significantly since 1991. We have much more powerful graphics cards in our computers, in our video game systems, and even in our cars. And those systems are getting smaller and cheaper, and this is Lara Croft. So Lara Croft is a character from a video game. A video game that's been produced over several decades and it's now a franchise. And this is her in 1997 and this is her basically three years ago in 2015. And you can see the significant improvement in the graphics integrity, and these are real-time graphics. So when we say real-time, unlike a movie, this will have instant feedback as the character's moving around in the game.

And so, the second thing that's changed since 1991 is smartphones and the ubiquity of smartphones. And it's quite fitting that someone's phone has just gone off, because they are ubiquitous, right? On cue. So these phones are everywhere, and the one thing VR can take from that proliferation of smartphones is the screens. The screens in your smartphones are high resolution, and VR requires high-resolution screens in the headsets.

And the third thing that relates to that is cost. So now that everybody has a smartphone and the video games industry is generating better graphics, VR can ride on that wave. Unlike 1991, when the technology over-promised and under-delivered, in 2018 it doesn't have that problem anymore. The technology is cheap. A headset in 1991 could have cost anything from $20,000 USD to $100,000 USD. In 2018, you can buy an Oculus Rift for around $600 USD, an Oculus Go for $200 USD, and an HTC Vive for around $1,000 USD. So we can see the changes that are happening in the tech and in the cost.

The other thing that's happened is social media. Social media wasn't around in 1991. Facebook, Instagram, all those things that we love ... Some of us, anyway. Are now around in 2018, and people like Mark Zuckerberg, the CEO of Facebook, are big fans of VR, to the point where Facebook owns Oculus. And he's bankrolling a huge chunk of the technology roll-out. And he claims he wants half of Facebook's users, the world's largest social media subscriber base, in a virtual reality headset. So that's one billion people. I would argue that's very ambitious. However, he has the money and he has the people to move that forward. A little more conservative estimate is that by 2020, there will probably be around 200 million VR headsets sold worldwide. So that's still a lot of headsets.

But what are we going to be using it for? Now, entertainment is one part of how VR headsets are going to be used. But I believe, and my team and academics at Art & Design believe that there could be so many other applications for VR, and designers are very well placed to try and uncover those and research them in the design field. And so, I'm going to move into some projects that we've been working on that challenge the boundaries of how we might use this technology.

And the first area is stroke. And I've been working with St Vincent's Hospital here in Sydney for the last three years on a project looking at how VR can change the way we deploy rehab in stroke. It was part of a serendipitous moment, as many innovations are, where I met the Director of the Rehab Unit at St Vincent's, which is literally half a kilometre from our faculty. And we met for coffee, and he said to me, "John, I've got three problems in stroke. I need to motivate my patients to do rehab, I need to lower their levels of anxiety through better education, and I need to prevent them having further strokes. If you've had a stroke, you're six times more likely to have another one. So could better communication and better education of why you've had a stroke actually have an impact on that rehab process?" And I said, "Well, listen. We're using this new technology, VR. Somehow we may be able to use VR and visualisation and design to make a difference in these three areas."

At that point, we didn't know how, but we started off a relationship in terms of research. And so, the first question we asked was ... Designers always ask really annoying questions. And the first question we asked was, "What do you use at the moment? How do you communicate stroke right now?" And he said, "Well, we talk to patients, of course. We give them literature. We point them on to the internet, and we also show them their brain scans." And I was like, right, okay, that's interesting. And so, show me a brain scan.
So this is a brain scan of a stroke patient. It's an MRI scan of a brain. And the white areas within this scan, the high signal areas, are blood flow in the brain. And when you have a stroke, that blood flow becomes occluded and parts of the brain die. And so, having a scan done is a way that they can find out where the problem area is and whether they need to treat you through drugs or surgery or whatever.

The challenge with these types of images is they're fantastic for radiologists and people who have medical knowledge. But for people who are not from a medical background and haven't been trained in that field, they're encoded. They're difficult to understand and actually can be more confusing than not showing them. So basically we thought, "Well, how can we reconstruct these? How can we take these data sets and make them more accessible and bridge the gap?" Now, an engineer put his hand up in the room and said, "Listen, John, we've done this already. It's been done. Visualization's been done in radiography and radiology. Just leave it. It's fine." I said, "As a designer, I don't feel it's been done in a way that makes it accessible for a certain type of audience." And designers, we take this complex world, these complex technologies, we fuse them, we overlap them, we hack up tech and we output things that become more meaningful for different groups and say different things. And so, things may have been done before, and in many cases many technologies already exist, but as designers we can subvert those, change those, play with them, and hack them up into things that can actually be useful to other people.

And so, we gathered all the information from the technical team and we looked at all the 3D reconstruction software that already exists, and we turned the data into 3D that could be used in communication. And I'm going to put up this slide, because I'm a big fan of this person. Not the skeleton, but the person who created this image. Does anybody know who created this image? Maybe? Yeah. Close. Not quite Leonardo. A little bit after Leonardo. This was created by one of the first Western anatomists, Andreas Vesalius. And he produced one of the first, if you can call it that, textbooks on anatomy education in the year 1543. A long time ago. But what I love about this image is it's a didactic image, an image that tells a story that can be useful for learning, but it's also an image that communicates a sense of mortality, because the skeleton's holding a spade, openly mourning his own death.

And so, how can we, in the 21st century, in VR and stroke rehab, take some of those concepts of fusing art and science back together again after the divorce of rationalism and separation, and I still think we're on that continuum, how can we come back and meet in the middle? Because particularly when we're trying to explain complexity, art, design, and medicine come together again in medical imaging visualisation. And so, we started to reconstruct data from brain scans and build it up in 3D using software techniques from video games and animation, and started to make images and animations. And this is animation at play: a high-quality render that we created as a movie. It wasn't a VR experience quite yet. It was just a movie that shows you how blood flows through the vessels. So we made the vessels transparent. We added particles. And the tubes that you're looking at are the vessels that feed the brain with blood. And this is something that I looked at back when I did my PhD in the UK.
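The reconstruction step described above, pulling the high-signal voxels out of a scan volume so they can be turned into geometry, can be sketched very minimally. This is an illustrative Python sketch, not the lab's actual pipeline; the threshold value and the toy scan array are invented for the example:

```python
import numpy as np

def segment_vessels(volume, threshold):
    """Binary mask of high-signal voxels (blood flow in an MR angiogram)."""
    return volume > threshold

def voxel_point_cloud(mask, voxel_size=1.0):
    """Turn a binary mask into 3D points ready for surface meshing."""
    coords = np.argwhere(mask)   # (N, 3) array of voxel indices
    return coords * voxel_size   # scale indices to physical units

# Toy 3-slice "scan": one bright vessel voxel per slice
scan = np.zeros((3, 4, 4))
scan[0, 1, 1] = scan[1, 1, 2] = scan[2, 1, 3] = 200.0

mask = segment_vessels(scan, threshold=100.0)
points = voxel_point_cloud(mask, voxel_size=0.5)
print(points.shape)   # one point per bright voxel: (3, 3)
```

A real pipeline would then run a surface-extraction step (marching cubes, for example) over the mask to produce a mesh that a game engine can light and texture.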

However, we wanted to go further. I was in the new world. I was in Australia. I wanted to do something much, much, much better and much bigger and change this into something that could be an immersive experience. Now, in sci-fi we've been there already. I find sci-fi the most fascinating space to find inspiration for how we do new work. Sometimes people think you're mad, but sometimes there's a grain of truth in that sci-fi thing that you've seen, that piece of literature or that movie, that can actually inspire and shape the future of how we use tech. And this is Fantastic Voyage from 1966. It's an interesting movie, but the one thing I took from it was: how can we walk inside our own bodies? Because the crew, these bionauts in this movie, get shrunk down to the size of a grain of salt or sand and injected into a scientist's brain to save the day.

Now, how could we change the paradigm so that instead of viewing data, we walk inside it, move inside an embodied experience of our blood vessels? We go on a virtual field trip with our doctors to see where and how and what happened, as a more efficient way of educating and communicating. So we started to do this. We brought on some post-docs and some researchers with a background in video games, and we started to reconstruct the data and put it into a video game engine so that it could be experienced in a VR headset. And this is one of the images that was created, a still from our VR experience. You can see in this we've got the blood vessels, but we've also added the cinematic. We've added lighting, texture. We've added red blood cells and we've added the stroke. So we've created a hybrid: part data visualisation and part design visualisation. Because we need to make a meaningful story so that the individual can understand what's going on. The data alone wasn't enough.

And so this is the walkthrough of this. This is the patient with the headset on actually looking around as they walk through, and they're going to follow the clot, in this case, as it moves through the system. And we added things like a torch, which might seem really odd, but if you go inside a cave or you're moving through a landscape, you often have paraphernalia to help you. So we developed that for the interface. We developed a map. We also have this lighting. And so, as we move through, the patient will follow these arrows and they'll get to the point where the stroke occurred.

So the tube that they're walking through is the actual scan data. But the colours we've added so they can make more sense of what they're looking at. And obviously the dynamic component. So the patient looks back and you'll see the clot as it moves towards them. And they actually move their head back, and unfortunately that's the point where the stroke occurred, where the occlusion occurs in the vessel. And so it becomes a much more visceral experience, something that's much more memorable. So rather than having a verbal explanation or a lecture about what a stroke is, you actually go to stroke like you would do on a field trip.

That comes with interface challenges. If you develop content where people move around in VR in a virtual space, you can't have them flying around, because it makes them feel nauseous. So what we had to do is develop ways that they could walk in the vessel but not fall over and not feel sick. So we developed these walkways, an additional design feature in the data that allowed them to move to the point where they could see where the stroke occurred. This is another type of stroke, and in a second you'll see the arterial plaque layered on top from the scan.

And so that's a lifetime of arterial plaque, which contributes to stroke because the vessels get narrower and narrower, and any particulate that breaks off lodges pretty quickly because the vessels are so constricted. And this is a bleed. So you'll see there's a bleed happening. There's a scab forming. This has been added to the data; we didn't actually have the dynamic temporal component from the data, but we added it to help with the explanation. And you'll see the particulate break off from the scab.

And so we added other design features like we used an XBOX controller so the patient could navigate and move around, and that was really useful. They picked that up really quickly. So it's very intuitive. We also added things that allowed them to find their way back to where they started so they didn't get lost and disoriented. So these are digital breadcrumbs. So all these questions that the engineers were not asking, we were asking as designers. So this notion of just putting data straight into VR without any design is flawed, in my view. It's the overall packaging of the process and experience. Not just the individual data integrity component.
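The digital breadcrumbs idea can be sketched in a few lines: drop a crumb whenever the user has moved far enough from the last one, and offer the reversed trail as the path home. A hedged, pure-Python illustration; the spacing value and positions are invented:

```python
import math

class BreadcrumbTrail:
    """Drop a crumb whenever the user moves far enough from the last one."""

    def __init__(self, spacing=1.0):
        self.spacing = spacing   # minimum distance between crumbs
        self.crumbs = []

    def update(self, pos):
        """Call each frame with the user's current (x, y) position."""
        if not self.crumbs or math.dist(self.crumbs[-1], pos) >= self.spacing:
            self.crumbs.append(pos)

    def path_home(self):
        """Crumbs in reverse order, leading back to the start point."""
        return list(reversed(self.crumbs))

trail = BreadcrumbTrail(spacing=1.0)
for p in [(0, 0), (0.4, 0), (1.2, 0), (2.5, 0)]:
    trail.update(p)
print(trail.crumbs)   # [(0, 0), (1.2, 0), (2.5, 0)]
```

In a VR build, the same logic would run each frame on the headset's tracked position.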
This is the last example that I'll show you in this project, and this is one where I have a video of a patient who talks about their experience. This was a young gentleman who attended clinic at St Vincent's who'd had an aneurysm, a bleed in his brain. We believe he had a hereditary issue: a large bulge in one of his blood vessels with an inherent weakness, and unfortunately, that bulge burst and bled into his brain, and he effectively had a stroke. So I'm going to just show you a little video of Ian talking about that process and how he's been using the VR in his rehab.

20:22 video presentation

Speaker 7: So as you're pushing down, let that knee come out to the side.
Ian: I've been coming here for four years. Since I had the stroke.
I was 20 years old at the time. I was playing soccer. I don't remember the first three weeks of it. Basically, I had 13 percent chance of survival. I couldn't walk, couldn't move, couldn't talk. And thankfully it's all come back.

Speaker 9: Ian had a brain haemorrhage. We think it was an aneurysm. He surprised me with his moxie, with his toughness. Then I look inside myself and I think, "How can I help this person?"
Today we've asked Ian and his mother Lisa in for an interview. This is case number three in a case series that John and I have been working on. We're using gaming technology so that we can create a virtual world and show him directly how his stroke might have occurred.

John McGhee: We're going to give him an immersive, full 360-degree experience. We're actually going to let him feel like he's walking along his own arteries. We take the data from the MRI scans and the CT scans and we turn them into three dimensions.

Speaker 10: So we're just looking at a 3D reconstruction of the CT angiogram, when the dye is in the arteries.

Speaker 9: Can you just show me on this exactly where you think the aneurysm is most likely to have been?

Speaker 10: It'll be this vessel, going through the back.

Speaker 9: Okay. Righty-o.

Speaker 10: I'll just put this back.

Speaker 9: So this is the image that we'll need to give to the designer, John.

Speaker 10: Yes, we can do that.

Speaker 11: So this is the data that we've got sent through from Steven. I think this section here is roughly where the stroke occurred. So the aneurysm should be bulging out here, I think.

John McGhee: How do you think we can achieve that? To have the particles go out?

Speaker 11: Bursting out? If we have blood particles flying out as well as maybe parts of the aneurysm itself tearing away, I think that visually would be fairly effective.

John McGhee: Yeah, yeah.

Speaker 9: Ian is part of our trial looking at whether this immersive technology has any benefit over standard rehabilitation education.

Now, there's a little area there. Can you see that little circle? We're gonna stop there. Just have a look at the roof. You'll see there's an out-pouch in there. So we believe that that's where your aneurysm would be. So I'm going to flick you outside your brain. And now, can you see the aneurysm? It's an out-pouching of that vessel right in front of you. And I'm gonna now press a button to show you exactly what happened at the moment of the aneurysm bursting, all right? So just keep looking and we'll do it now.

So the blood cells are pouring out into the brain matter itself, and what happens with the blood vessel is that it starts to spasm like that and eventually it starts closing itself off, and the areas that this blood vessel's supposed to supply will no longer get any blood.
Ian: I didn't know what had happened. So that was quite fulfilling for me. It was very cool. I enjoyed the experience.

John McGhee: So what the technology provides is a whole new platform. To the best of our knowledge, no one else in the world is doing this. Rather than looking at it through a screen or a window into their data, patients can actually be inside their data. And we hope that you're going to engage in your rehab in a much more productive way.

23:55 Learn@Lunch presentation resumes

So that was screened on National Geographic. So that went global, that one. And we're really proud of that project, to the point where we were the first design-oriented team to ever win this award, the St Vincent's Australia Health Award in Excellence and Innovation. And that project continues. We're now developing another iteration of it in stroke and looking at scale, because we've just run a pilot and we want to see how we might scale it up in clinic and build a stronger evidence base that it's actually useful in rehab.

So I want to move you to a different scale. We've been working at a macro scale in the brain, looking at things that are physical and that we can probably recognise as blood vessels. I'm now going to move you down to a micro scale, and a nano scale. And this is a slightly different project. This isn't in the clinic. This is a pure research project. So we are part of what we call an Australian Research Council Centre of Excellence, and what a Centre of Excellence does is bring lots of people across Australia, mainly universities, together to solve really big challenges in science and research. We're part of one called Convergent Bio-Nano Science and Technology, which in layman's terms means they're looking at ways of delivering new types of nano-based drugs. So the new versions of chemotherapy and drug treatment and scanning modalities will probably involve nanoparticles. So this Centre of Excellence is looking at ways of developing those nanoparticles and seeing if they make any clinical difference in the way we deliver types of chemotherapy, for instance.

However, there's a deficit in the Centre of Excellence, and the reason why they asked us to join is they have lots of data and lots of users who want to access that data, not just the scientists. And so, what we wanted to look at in the first phase of this project is: can design-led immersive visualisation of complex scientific data be used as an aid to engage and improve comprehension and recall in education contexts? Now, I will go on to say how we might be using it in science discovery, but initially, this first work package was about trying to take some of the complex data they generate in the centre and look at ways of providing tools for engagement, recall, and education.

And so, the starting point for that process, a bit like the MRI project we did with St Vinnie's, was to look at what kind of data they generate. So this is my colleague Rob Parton. He's based at the University of Queensland, and he works in this area of electron microscopy, and he takes lots and lots of detailed images of cell structures. They're so detailed that if this machine on his desk were to slice a human adult into little pieces, it would take 60 years to go through them. That's how thin the slices are. And so, the images on my left here, on your left, are images of a metastasised cancer cell that he wanted to image to see more about what's happening internally within the structures. And so this is a migratory breast cancer cell, an MDA-MB-231 cell.

And what we wanted to do was take that cell structure, those two-dimensional image slices, and reconstruct them into 3D to explore how we might develop an engagement tool. And so, that was very technically challenging. We had to use what we call surface rendering to do that. So we generated these surface models of different aspects of the cell, and I'll just show you how we built those up. The blue part is the nucleus of the cell, the part that contains the DNA, and that was taken from the data. The endosomes are there because they're important within the cell; that's where they really want these drugs to arrive. And then we had the microtubules, which are ... Sorry, not the microtubules. The mitochondria, which are the battery of the cell. And then we have the cell membrane. And so we managed to get those into 3D, but what we wanted to do next was visualise them.
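The surface-rendering workflow just described, building separate 3D models for nucleus, endosomes, mitochondria, and membrane out of 2D slices, rests on a simple idea: stack the labelled slices into a volume, then pull out one binary mask per structure. A toy NumPy sketch; the label codes and miniature slices are invented for illustration:

```python
import numpy as np

# Hypothetical label codes a segmentation step might assign per pixel
LABELS = {"nucleus": 1, "endosome": 2, "mitochondria": 3, "membrane": 4}

def stack_slices(slices):
    """Stack 2D labelled EM slices into a 3D label volume."""
    return np.stack(slices, axis=0)

def structure_mask(volume, name):
    """Binary 3D mask for one structure, ready for surface rendering."""
    return volume == LABELS[name]

# Two tiny 2x2 labelled slices standing in for thousands of EM sections
slices = [np.array([[1, 0], [0, 2]]),
          np.array([[1, 1], [0, 2]])]
volume = stack_slices(slices)
print(structure_mask(volume, "nucleus").sum())   # 3 nucleus voxels
```

Each per-structure mask would then feed the same surface-extraction step as any other volumetric data, producing one mesh per organelle.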

Now, like the last project, we started off with the TV and movie toolkit, the very linear approach to how we might get that message out. And so, we used high-quality renderers such as V-Ray, and my colleague Andrew Lodger, who's here today in the audience, worked on developing an animation derived from the data. So that cell right in the middle of that image is the cell that you saw on Rob's microscope. We've reconstructed it, and we used it as the basis of creating this animation.

Now, this animation serves a couple of purposes. One is it will probably help you understand some of the science that we're working with; it also shows the complexity of the 3D models that we might want to move into VR. And I will talk about how we moved this into VR and developed a ... I hesitate to say solution. We developed an output prototype, because it wasn't quite a solution.

28:45 video presentation

Speaker 12: Scientists are using cells isolated from human tumours to develop new ways to tackle cancer. Here we can see the cancer cells growing in the culture dish in the laboratory. For most drugs to be active, they must go inside the cancer cell. Scientists are developing tiny particles, nanoparticles that can be loaded with drugs. The nanoparticles can be modified so that they attach to the cancer cell and get carried inside. To make sure that the drug-loaded nanoparticles only attack the disease cells, they are engineered so that they only bind to the target cancer cells and not to neighbouring healthy cells. Once they bind to the cancer cell, they are taken in to the cell. They can't pass through the surface. Instead, part of the cell surface forms a pit and then it engulfs the nanoparticle.

Now we are looking inside the cell. Cellular proteins wrap around the vesicle and drive it inside the cell. You can now see a vesicle carrying the nanoparticle moving towards the very centre of the cell. The vesicle carrying the nanoparticle travels further inside the cell and then fuses with a compartment called an endosome. This is part of the digestive system of the cell, the cell's stomach. The inside is acidic and its job is to digest incoming material. The nanoparticles are degraded inside the endosome, releasing the drug into the cell. The drug can then kill the cancer cell. Nanoparticle-based drug delivery provides a way of attacking cancer without the side effects of conventional chemotherapy.

30:50 Learn@Lunch presentation resumes

So medical animation and medical visualisation are not a new thing, but we used them as a tool to start the process dialogue, develop some understanding of the science, and get out into that engagement space. But our true goal was to take some of the things we learned from creating that movie and actually bring it into VR. And so we had a headset called the HTC Vive, and that's slightly different from the Oculus because it works in room-scale VR. So imagine something the size of this stage that you can move around in and interact with, rather than just sitting at a desk or a computer. And that was really important to give you the sense of embodiment, that you actually feel like you're in place. And so this is John Bailey in my lab, who's working on that project, and you can see where we've marked out the arena for the VR.

However, the interesting part is that once you move away from building movies and into this VR space, you have to think of it much more like an interactive experience where you have people in place. And so we had to think of it much more like a video game, and we wanted to develop different types of levels. So we have this thing called the cell paddock and cathedral. It's the same data, but we are moving around in a landscape and then moving around in a cathedral-like structure taken from the data. We also had to think about things like interface: how you move around in the space with the controllers. So we added little maps that you could bring up, and little points of interest annotating what was there, such as the different ways the nanoparticle would pass through the cell membrane. We also had touch, so when you touch things it gives you feedback on what you're touching. So you're a bit like a virtual rock collector or explorer. You're moving around touching things, you're navigating, because it is as complex as the real world once you build up all these layers of the data.

And this is an example of someone moving around. There's also sound. Now, bear in mind, this is for an educational context, so we wanted it to be engaging. That's the nanoparticle coming down. So totally different from the animation. And the user's decided to have a little look and see what's happening, and then they get an explanation of what's happening. The other thing we've done is make it like a landscape so that the user feels grounded. There are a number of reasons for that; one is cybersickness. You feel really sick really quickly in VR if you start shunting the user around too fast. And so we allowed the users to teleport and move around in that environment.
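The grounded, teleport-style locomotion described here (and the walkways in the stroke project) boils down to constraining where the user may land: a requested destination is snapped to the nearest valid point on the walkable surface rather than allowing free flight. A minimal Python sketch, with invented walkway vertices, and vertex-snapping standing in for a proper projection onto the path:

```python
def snap_to_walkway(pos, walkway):
    """Constrain a requested teleport destination to the nearest walkway vertex.

    A real implementation would project onto the segments between vertices;
    snapping to vertices keeps the sketch short.
    """
    return min(walkway, key=lambda w: (w[0] - pos[0]) ** 2 + (w[1] - pos[1]) ** 2)

# Invented walkway vertices (x, y) through the virtual environment
walkway = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5), (3.0, 1.0)]
print(snap_to_walkway((1.2, 0.3), walkway))   # (1.0, 0.0)
```

Because the user only ever lands on the walkway, they can explore freely without the disorienting free-flight motion that triggers cybersickness.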

And now they're going to move into the next level. And now they're inside the cell, they're looking around, they can touch things, they can explore. That's the nucleus they're standing on, from the data. And little things mattered, like the fact that putting someone straight into data wasn't always the best move; better to give them context for why they were there and what the thing was. So we built a virtual version of the lab, an exact replica of the microscope. You have to walk over to that microscope and look into the lens to then go into the data. So again, this is all about user experience.

We have been testing this. We've been running some experiments with colleagues at Monash to see whether this actually makes any impact on the way we learn. We tested 48 third-year biomedical and pharmaceutical science students. They all received a lecture. Fifteen of the group were also given the VR experience for 15 minutes. And then we took some results from one particular question in their exam, where they basically had to explain the mechanism of internalisation. And these were the results that we got. Now, bear in mind, this was a pilot group; there weren't that many of them. But the results were really interesting.

We saw a significant difference between the students who saw VR and the non-VR-exposed students. We weighted each student's performance on that question against their average mark for the exam. And we saw the VR students performing 5 percent better than their average, and the non-VR students performing 35 percent worse than their average. So there's something going on there. Now, we're not quite sure what it is. Is it the VR? Is it the design? Is it the sound? Is it the colour? I suspect it's all of those things. And being someone who designs things more holistically, my analogy is a bit like an orchestra. It's not necessarily the individual parts of the orchestra; while they're all important, it's how they come together to make the melody and sound that becomes significant. And I think a VR experience is very much like that.
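The comparison described above, scoring each group's exam-question result against the students' own average marks, can be sketched in a few lines. This is an illustrative sketch with made-up marks, not the Monash pilot data; the function name and all figures are hypothetical.

```python
# Illustrative sketch: each group's question score relative to the students'
# own course averages, the comparison described in the talk.
# All marks below are hypothetical, NOT the actual pilot results.

def relative_performance(question_scores, course_averages):
    """Mean percentage difference between a question score and each student's
    own average mark (positive = above their usual level)."""
    diffs = [(q - a) / a * 100 for q, a in zip(question_scores, course_averages)]
    return sum(diffs) / len(diffs)

# Hypothetical marks out of 100, chosen to mirror the reported pattern:
vr_group = relative_performance([70, 66, 74], [66.7, 63.0, 70.2])      # roughly +5%
non_vr_group = relative_performance([39, 45, 42], [60.0, 69.0, 65.0])  # roughly -35%
```

The key design point is that each student is compared against their own baseline, not a single cohort mean, which is why both groups can deviate from "average" in the same study.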

I won't go into this one in too much depth, but we're moving into the discovery zone, the science discovery world. We are looking at how those nanoparticles diffuse within models and trying to develop tools that would support the science team in how they discover and interrogate their data. So this is in an animal model, looking at developing interfaces in VR so they can see their data. And the big thing that VR allows you to do is see things at scale. Normally they're looking at a screen that's maybe 20 inches wide, or whatever. But imagine that the data they're looking at is the size of a bus or a small car, and they're in it in VR, exploring it through the interface.

And we've developed some dashboard tools that allow them to interact and work with the data temporally. So they can look at where the drug's gone over time and see the change in the nanoparticles as they move through the model. And they can select different aspects of the model. There's a whole bunch of tools within that that we've been developing. This is work in progress, but it's taking everything we've learned in some of the previous projects and putting it into these new types of discovery tools.
The other thing that's really fascinating about VR, and I think this is the area, not just in medicine and health but in all aspects of VR, where it will excel, and we now have platforms to do this, is the multi-user components, the social components. That's the part, with Facebook, that I'm most interested in, to be honest: the way that you can bring lots of people together in an environment who are not necessarily in the same room or the same country. So we've been jumping on the back of some of that, using some of those tools. This is an experiment we ran in the lab. We had two of the lab researchers, John and Mark, and their two avatars in a bunch of data, and they're talking to one another, sharing data. They're actually in the same room. I don't know why Mark's doing his arms out like that, but anyway. So they're just playing with some controls. And you'll see in a second, as it pans across, John Bailey, who's also in the same data doing some stuff, and they're seeing one another.
Now why is that cool? Well, John doesn't have to be in the same room. John could be in Adelaide. John could be in London. John could be in New York. And he could be logging in at the same time. And we see a couple of different avenues in that. One is science discovery. But the other aspect is when you teach a class. Say you wanted to bring 20 students in and talk about a data set, or any concept for that matter; it could be really useful. And also in the patient consultation space.

And so, all the things we're learning in our research, we want to see converge into these different outputs. One is education. As I said, we could have multi-user virtual field trips in class. Many companies and educational institutions around the world are now moving into that space, using VR as a teaching tool. The other area we're really interested in is the patient one: how can you fuse lots of your data together and have a virtual consultation with your clinician? And the third one, which is harder and which is an area we're moving into slowly, is the discovery process. How can we assist scientists, or designers for that matter, collaborating across different places? How can they collaborate and co-develop things in the VR space? There are lots of design challenges and questions around that.

Our work has made an impact internationally. A lot of the VR work that we've been doing has been picked up on the international stage, in New Scientist, Wired, the Sydney Morning Herald, and also the flagship science show on the ABC. So we are getting noticed, and it is because we are bringing together and fusing these different aspects: art, science, and data.
I'll talk about this very briefly. This is a very new project we're doing right now, which builds on the work that's been done at Stanford around alleviating pain and anxiety in paediatrics using diversion techniques in VR. That's a really, really interesting project and it's having really good results. We're doing that now at St Vincent's in a different way. We're developing a little avatar dog. We're not sure of the name yet; could be Vinnie, maybe. But that dog is basically something that you build empathy and resilience towards. You have to train the dog to do certain tasks. It's not visualisation of data, but it is VR, it is immersive, and you do a whole bunch of things. And we want to test whether that can alleviate certain aspects of acute pain, partly as diversion, but partly through building a connection with a virtual object or thing; in this case, a dog.

So to finish up, this is my last slide. What I wanted to finish on is a little bit of a ... I hate the word manifesto, but a little bit of my view of where things are going to go and how we're responding to it in Art & Design. We're creating these virtual worlds in VR. There's going to be a huge amount of content required, a huge amount of visualisation, and effectively we're building these other places. However, as we build those other places, designers need to be part of the party. We can't be brought in at the end. We need to be integral to the process of building these virtual places.

And I know Ross is very passionate about this, and the new programmes that we're developing at Art & Design, the new research labs, and the collective firepower that we have in terms of knowledge base, evidence base, and innovation. We need people to design these worlds, and we have to make them; they don't actually exist in the world right now. There are lots of different disciplines coming together, but we have to develop them in education and in research. And UNSW is very well placed to do that, particularly in Art & Design. I'd just like to thank all my collaborators, because we are one of these dream teams that have come together to make that type of content, from all different disciplines: chemistry, biological science, digital media, arts, video games, and so forth. Thank you.

41:30 Q&A

Ross Harley: Thanks very much, John. What a fantastic voyage we've all been on for the last 40, 45 minutes. This is the opportunity for anyone in the audience who has a question to engage in a discussion. While I'm waiting for the hands: just put your hand up and you can go straight to the question. So we have one here on my right first, and then we'll go up the back.

Speaker 13: Yeah. You're looking at using VR for education, but what about memory? How are you engaging haptic experience, smell, sound, touch, so that the brain actually processes it and remembers it? And is there a problem for younger children in forming a brain connection if you take away all those other cognitive experiences and just have a visual?

John McGhee: Yes. I'll maybe answer that question in two parts. One is other aspects and other senses. There are lots of researchers internationally looking at how we can add other things into VR. Like you said, haptics are one. There are also people looking at smell-o-vision, bringing in odours to support an experience. I think these are called perceptual bindings: the more of them we have, the more our brain thinks we are in a place. A lot of what we build is just about enough to convince your brain that you're in a cell, or in an environment where you're quite happy. As we add more of those perceptual bindings in, you'll probably have a much more visceral experience.
I think the challenge will be how far do we go, and how long do we want to spend in those virtual places? Those are two things I don't think have been answered yet. I think if you're in these spaces too long there could be issues, but people are looking at how to alleviate some of those things.

In terms of the question related to children, it's interesting. The VR headset manufacturers have suggested that any child under 12 shouldn't really be in a VR headset, despite the work at Stanford on alleviating pain when they change burns dressings, using VR as a distraction technique. I think I read somewhere that the only reason they've done that is because that's the age below which Facebook don't want you on Facebook. I don't know. I mean, I think there's some evidence to say that your ocular system is still developing, and so if there's too much time in a VR headset at that developmental stage, there could be some structural damage. So I think the jury's still out on a lot of those things. I don't think I'm as well placed as others in the world to answer that question, but I think it's a good question.

Ross Harley: Fantastic. Thank you, John. Up the back.

Speaker 14: Thank you. There's actually a word you just used: jury. And you mentioned evidence base in your manifesto. Could you see a point where you could be working with, perhaps, the law faculty at the university to see whether this tool could be used in a court to visualise ... I'm thinking of an example: a person who might have frontal lobe damage, and the jury needs to understand whether that might've affected his volition or cognition.

Ross Harley: What a great question.

John McGhee: It's a fantastic question. I mean, there is a big part of VR which is about experience and being in another person's shoes. What is it like to be someone else? They often use this term, the empathy machine, where you can feel and see what it'd be like to be in another situation; it might make you more empathetic. I think there are lots of opportunities to work with areas outside medicine for VR, absolutely. Being a designer, I still think it's always important to try and understand the problem fundamentally before we jump to a solution. But I think your example of trying to use it as a way of exploring what it might feel like to have a certain condition, for the jury ... Is that what you were thinking? It could be a really interesting one.

Ross Harley: Right just next to you, and then front.

Speaker 15: Hello. I was one of the SDS 9320 students last semester. I did 3D visualisation at UNSW. I started off in computer science; I'm not a computer science student, but I took it as a free elective. I'm wondering, this still seems very much like an Art & Design thing, and whether there are opportunities either for teaching these principles within, say, computer science and trying to enlist the help of programmers, or for doing research projects involving computer science, either at UNSW or at other universities. Can you talk a bit more about that?

John McGhee: Yeah, sure. Of course. It's already happening. I probably didn't illustrate it because of the lens I was looking through in the presentation, but a lot of the stuff that we do is hugely dependent on collaborations with engineering and computer science. A lot of the data pipelines that we work with we have to build collaboratively, and we actually now have a post-doc computer scientist in the lab. We also have Epicentre, which is another facility within Art & Design where we have a lot of computer science and engineering specialists, and we're now looking at how we can enhance what we do technically and move forward with a lot of our work. So it's a hand-in-glove relationship; what we do can't exist without those relationships. But I often focus on the deficit area, because there are a lot of computer science labs around the world that are into VR, and very few art and design-led labs that are actually probing the aesthetic; most are only probing the tech innovation. So that's why I slanted my talk a little towards that emphasis. But you're absolutely right.

Ross Harley: Yeah, it's a great question. I should just add that at UNSW, in our Art & Design faculty, many of our students do dual award degrees. In fact, half of our students do degrees like a Bachelor of Media Arts with a Bachelor of Computer Science and Engineering. Many of our research projects are collaborative across the disciplines, and we have a new project called Design Why?, which is bringing design-led research and education projects into engineering, into business, and into the Faculty of Built Environment. So we're trying to work with our colleagues across the different areas. So it's a great question.
There's another one down the front.

Speaker 18: It's more a comment, really, but I'm totally impressed by the technology. It's obviously so clearly beneficial to students, but it would also be to middle-aged and older people, because I go regularly to talks on health, supported by Waverley Hospital and so forth. There are so many people there who are genuinely interested in really understanding these things, and I can't help but feel this is absolutely a way of educating the intelligent masses in a really serious, preventative way that would lead to reduced costs in health expenditure.

Ross Harley: Absolutely. I think that's another great comment. I'm sure John has many thoughts, but I know that's also something we're very keen on as artists and designers: working with our colleagues to achieve better health outcomes through prevention. And so education and engagement in all kinds of ways, particularly these new forms of technical immersion in VR and so on, are absolutely fundamental. John, what would you say?

John McGhee: I would reiterate what you've said. I mean, a big chunk of what we do could be preventative, around giving people experiences in VR. I think the challenge at the moment for getting different types of users to access VR is just the proliferation of headsets; there just aren't enough people buying them at the moment. But I think that will come. I think the numbers are growing exponentially; look at the figures for the new Oculus Go. The challenge has been that the price point is still too high for your average person. They still don't see the value proposition in spending $1,200 plus a computer. But Oculus know that, and so they've just released a headset called the Oculus Go, which is $200 USD and doesn't require a computer, and it will be the must-have Christmas present, I suspect. So go and get them on your Christmas list. But it allows lots of things.

But what's interesting about it, and sorry, I'm deviating from your question slightly, but [inaudible] to lots of people, and a lot of it's to do with cost and proliferation. So I think as the headsets drop in price and people get smarter with content development, it won't just be early adopters like me buying them. People will be buying them as ubiquitously as they're buying smartphones. In fact, they'll be cheaper than smartphones at this rate, and you'll be able to do much more with them.

I found a really interesting example of how VR might actually be transformative. It's a really banal one, but I think it highlights how sometimes, as researchers, we miss the point in the commercial world. Netflix is on the Oculus Go. So you can watch a Netflix movie with your friends and family in VR, and they can be anywhere; they just have to have a headset. And so the value proposition, being someone who has most of my family overseas, of being able to sit down and watch a movie on Netflix together with friends and family ... I would spend the $200 just for that app alone. So I think that as we develop content, we need to think socially and we need to think around the human need, and then everyone will just buy in to headsets. It'll be ubiquitous. They'll be giving them away.

Ross Harley: I suppose that $200 is much cheaper than an airfare, and I suppose wouldn't contribute to other-

John McGhee: Yeah, exactly, Ross. Thank you. Yeah.

Ross Harley: One last comment and then we've got another ... Two more questions.

Speaker 19: I just want to raise one point: there's an increase in young children with ADHD, and I was interested to see you were using VR for reducing anxiety, so the potential use for it in neuroplasticity is a very interesting question.

John McGhee: Yep. Definitely.

Ross Harley: Thank you for that. So we have another question just down here on my right a few rows back, and then over on the left.

Speaker 20: Thank you for that. From a clinician's perspective, we look at metrics around clinical effectiveness. We're doing a lot of work now around patient-reported outcomes and experiences, but we also look for clinical effectiveness. One of the challenges with modern technology is that we do a lot of technology projects, and of course they make great video, but the clinical outcomes aren't measured. What's being done to measure clinical outcomes as part of this research?

John McGhee: Yeah, it's still early days. I think, working with colleagues at St Vinnie's, we need to build more longitudinal RCTs at scale. We need more of an evidence base to show that it's making any impact, and it's about getting the iterative funding that lets us then see if it makes a difference. I mean, our hunch is that it will make a difference. But as you say, we need an evidence base to integrate anything into the clinic. And that's where we want to go next with this. We've had the spark of the idea, we've built the prototype, and now we have to test it at scale and generate more evidence to see whether it does anything.

Ross Harley: Thank you. Question just over here. I think to your right, madam.

Speaker 21: You described a study in your presentation where VR-exposed people performed 5 percent better than the average, and non-VR people performed 35 percent worse than the average. I'm curious: why would the non-VR people perform worse than the average? Wouldn't they be the same? How would you explain that, do you think?

John McGhee: I suspect it's just the group that we were working with. Yeah, we've debated this among the collaborators. It was Monash that performed the experiment; we built the content. It's a very blunt instrument, that study, if I'm honest. It was a first step, to identify whether we were making any difference at all. It may be that that question was not backed up in the syllabus very well, so none of them performed that well, and our intervention was a small piece of the jigsaw. Yeah, it's a good question. Maybe we'll have a chat afterwards about all the different nuances of that one.

Ross Harley: Thank you. We've got time for one last question if there's any burning issues or questions. There's one right up the back. The gentleman in the cardigan and tie.

Speaker 22: It was a really great talk. I'm just curious about the aesthetic choices you make, working with a team, and figuring out how to set a mood and a tone with the colours that you choose and the sound design ... And whether you ever have arguments about the aesthetic choices, or how you come to a common consensus?

John McGhee: Yeah, you can imagine, with lots of designers and artists in the room, there's quite a lot of disagreement about how we might do a thing. Working with the scientists, it was our quantum leap, seeing their stuff, seeing their data shown in that way. But as designers, we wanted to set a mood and a style that wasn't replicating someone else's, that was unique. We were building a house style within our lab that relates to how we want to build content. It's interesting; a lot of aesthetics bleed through from people's backgrounds as well. I come from a design background, so I'm looking for very clean, clinical lines and a very simplistic aesthetic. John Bailey, who's one of the other designers, comes from video game design, so that's why there are lots of particles and fog; the last game he worked on had a lot of particles and fog. And this is interesting. This is part of our DNA as designers, and it's one of the things we talk about a lot in art and design: how we generate our aesthetic and frame it. So it wasn't as absolute and binary as working with the data; there was a lot of fuzz around how we developed that aesthetic. But that's just part of what you do as a designer. It's all about a lot of tacit components.

Ross Harley: And working with a lot of fuzz. And I think, on that note, could you all join with me in thanking Associate Professor John McGhee.