About this talk

In this talk we'll focus on creative coding using WebGL and introducing WebVR.

Transcript

- [Mark] Thank you. Thanks. So yeah, hi. Thanks for having us back. Obviously it wasn't that bad, the talk we gave last time, so it's good to be back. I'm Mark, this is Edan, and we're from a company called Kuva. We're based here in Bristol. We're a small design agency, well, there's two of us, and we tend to work with a lot of new and emerging technology. We do a lot of WebGL stuff, and we do it for installations and VR, which we're going to talk about in a minute, and just generally making interesting, cool shit. That's what we get to do, which is pretty nice. We're going to divide the talk into two sections. Edan's going to start first and run some of the demos we make, some of the visual stuff, and focus on the WebGL side. Then I'm going to jump in later and talk a bit about WebVR and how that applies to our kind of work. So I'll hand over to Edan. - [Edan] All right, so let's begin. I'm going to talk about WebGL, involving a little bit of creative coding. So what is creative coding? By definition, creative coding means coding creatively, but the thing here is that what I'm going to talk about is making something visually interesting and beautiful, something that looks nice. So, for example, what can creative coding look like? It can be hooked up with a Kinect, so you can see real-time people controlling particles and things like that. Or analog stuff, like using a [inaudible] to draw pictures with an algorithm. Or using real-time data on a city map to do data visualization, or using audio to drive visuals. Okay, so next: why WebGL?
Why do we want to use WebGL for all that kind of stuff? Typically, creative coders have tended to use tools like Cinder, openFrameworks, and Processing. I'm not sure if you've heard of them, but those are the tools creative coders normally use. The main reason you should try WebGL for creative coding is distribution: once you make something cool, you can just send a link to your friends, they click it, and they see it. Otherwise, with tools like Cinder or openFrameworks, because it's a [inaudible], you need to compile it and download it, and maybe at the end you just upload a video online or something like that. So what do you need to learn to do creative coding with WebGL? The WebGL API looks like this, and there's more, and more. I'm not going to go through all of the API here; I'll just talk about the concepts and ideas behind creative coding in WebGL. If we want to do something more advanced in WebGL, we use things called shaders, written in GLSL, which has a C-like syntax. There are two kinds of shaders in WebGL: vertex shaders and fragment shaders. A vertex shader computes the position of a vertex, which is a point. So if you're going to draw a triangle, that's three points: you move three points onto the screen and that's your triangle. The fragment shader computes the color of each pixel you're going to draw. If you draw a triangle, all the pixels it covers go through the fragment shader. In a JavaScript-like syntax it would look like this: for each vertex, it goes through the vertex shader, and for each pixel, it goes through the fragment shader. Okay, it's pretty straightforward.
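The per-vertex/per-pixel mental model described here can be sketched in plain JavaScript. This is only an illustration of the idea, not the WebGL API; the function and variable names are made up for the example.

```javascript
// Plain-JavaScript sketch of the shader mental model:
// the GPU runs the vertex shader once per vertex and the
// fragment shader once per covered pixel (in parallel).

// "Vertex shader": computes a screen position for each vertex.
function vertexShader(vertex) {
  // Trivial example: offset every vertex by a fixed amount.
  return { x: vertex.x + 0.5, y: vertex.y - 0.5 };
}

// "Fragment shader": computes a color for each pixel the triangle covers.
function fragmentShader(pixel) {
  // Trivial example: color based on the pixel's x coordinate.
  return { r: pixel.x / 100, g: 0, b: 0 };
}

const triangle = [{ x: 0, y: 0 }, { x: 1, y: 0 }, { x: 0, y: 1 }];

// Conceptually the GPU does this, but across thousands of cores at once:
const positions = triangle.map(vertexShader);             // once per vertex
const coveredPixels = [{ x: 10, y: 4 }, { x: 11, y: 4 }]; // from rasterization
const colors = coveredPixels.map(fragmentShader);         // once per pixel
```

On a real GPU these `map` calls are what runs in parallel, which is where the performance difference discussed next comes from.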
But then you're going to ask, "Why do I want to learn shaders instead of doing all the calculation in JavaScript?" You might say, "Yeah, I can do it in a big for-loop and do all those vertex position calculations in JavaScript." The only big difference is performance. The easiest way to show you the difference... Sorry. No. What's going on? Okay, so the difference between doing the calculations on the CPU and on the GPU is like this. On the CPU, everything is sort of linear, like a for-loop: you draw every single point one after another. On the GPU it's a little bit different: you have a more powerful machine with many cores, and it can calculate things in a big batch at the same time. So, all right. Okay. - [Male] I thought it was supposed to be quicker. - It's like this. - [Female] Yey. - It is that fast, because a GPU has thousands of cores, while a CPU has something like eight. So it's a big, big difference. The GPU is very good at calculating very simple tasks in a big batch at the same time. So yeah, that's why you want to learn shaders, once you want something more advanced. Okay, next. And that leads to a trick called GPGPU: using the GPU, that technique of simultaneously drawing thousands of pixels at the same time, and that performance, to calculate something like particle movement. We call it GPGPU, using the GPU to calculate general stuff. For example, behind the scenes you draw two triangles, but when you draw the color, instead of actually drawing an image, you do a calculation: a particle position has an XYZ value, so you store it in the RGB channels. Something like that. I'm really bad at talks, so I'm going to show some demos instead. Easier. This is a demo I created a while back called Particle Love. So, particles.
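The "XYZ stored in RGB channels" trick can be sketched with plain arrays standing in for textures. This is a simplified illustration of the GPGPU idea, not real WebGL code; the helper names are made up for the example.

```javascript
// GPGPU sketch: abuse a texture as general-purpose storage.
// A W×H RGBA float texture holds W×H particle positions, one per texel:
// x → R, y → G, z → B (A is spare). A Float32Array stands in for the texture.

const SIZE = 4;                                    // 4×4 texture → 16 particles
const texture = new Float32Array(SIZE * SIZE * 4); // RGBA per texel

function writePosition(tex, i, x, y, z) {
  tex[i * 4 + 0] = x; // R channel stores x
  tex[i * 4 + 1] = y; // G channel stores y
  tex[i * 4 + 2] = z; // B channel stores z
}

function readPosition(tex, i) {
  return [tex[i * 4], tex[i * 4 + 1], tex[i * 4 + 2]];
}

// A "simulation pass" is then just a fragment shader that reads the old
// texture and renders a new one — e.g. apply gravity to every particle:
function simulate(tex) {
  const next = new Float32Array(tex); // render into a second texture
  for (let i = 0; i < SIZE * SIZE; i++) {
    next[i * 4 + 1] -= 0.1;           // y -= gravity; on a GPU this runs in parallel
  }
  return next;
}

writePosition(texture, 0, 1, 2, 3);
const stepped = simulate(texture);
```

On the GPU the `for` loop disappears: the fragment shader body runs once per texel, all at once, and the output texture becomes the input for the next frame.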
Okay, this is WebGL. It uses that GPGPU technique I mentioned before to calculate all those particles, plus the motion blur rendering. And I'm going to show you a couple of demos. The first one is touch, with Leap Motion. I'm going to use a Leap Motion here. Hopefully it works. So it's some physics. There are 200,000 particles and I have a hand model to play with the particles. Very cool. The technique behind the scenes is something called a signed distance function, which is a very simple thing: given a 3D position, you pass it to the function, and the function returns the shortest distance to the surface. I use this kind of function to do the physics, like this. Okay, all right. I'm going to go to the next one. The next one I call... - [Male] [inaudible] - Sorry? - [inaudible] - Yes, yes, it is in Chrome. This one I call constrain. It's sort of an artsy kind of experiment I did, because I'd done a lot of particle experiments. For this one I decided to use these joining lines: basically you calculate two point positions with a little bit of constraint based on distance and force, and it creates something... I don't know what to call it, something. And there are some parameters you can adjust, so you can adjust the constraint and the force and create some interesting looks out of it, which is pretty cool. All right, and the next one is the spirit. It's another particle experiment. This one is a little bit interesting: most of the particle systems I'd created before used either a gradient circle or, being lazy, a square, but this one, if you zoom in and remove the motion blur, uses triangles.
It's a funny trick: you draw a triangle one way up like this, and then the next frame you flip it the other way, and with the illusion in your eyes you think you see a gradient. Because when you draw a gradient on the GPU you need to perform blending, and color blending on the GPU is expensive, this is a funny trick that uses your brain... tricks your brain to optimize the program, which is pretty fun, trying something different. It's pretty cool. You can change the color and things like that. Okay, and after that experiment I thought I'd done enough particles, but I kept doing it, and the next one is called icicle bubble. It's more about the rendering. It's a trick I sort of reinvented: I thought I'd invented it, but it turns out someone else did it in some demos ages ago. So [inaudible], I wrote a blog post about it. It has some light scattering that is all fake. I don't want to go through anything too technical, but if you want to know more, you can go to this website, click that link, and I have a blog post about it there. And then the last particle experiment I did, of course, is called hyper mix, which is sort of an extension of the previous experiment. This time I created a 3D voxel texture and mixed the colors together. I'm not sure what the best way to present it is, because I have 500,000 particles. You can see there are two emitters, one pink and one blue, and when they mix together you get some weird, ugly, muddy color. So this is another experiment I did. So, done with the particle experiments, right? Okay, cool. All right. Sorry. If you want to check them out, you can go to my website and you'll see those experiments. And now I'm going to talk about another experiment we did in-house, called hair. All right.
Because we'd done a lot of particle experiments, we wanted to try something different, something like hair. It's just some R&D, trying to understand how far we can push the limits of WebGL, creative-coding-wise. This one is very interesting. You can see that all the lighting is very nicely done, but there's a trick. Also, wait one second. It's dynamic: you can change the color and everything, change the intensity, and all the lighting looks like there's some global illumination in it, but actually I just pre-render things into [inaudible] textures like this, change it in real time, and show it in Three.js. Okay. And in this hair experiment I tried different models. I've got this one. Of course you've got to play with some hair; you cannot do a hair experiment without playing with hair. So we can change the hair length like this, which I think is pretty cool, and change the radius, or do some weird things. I don't know, it just doesn't look right. I think the model doesn't look good. And I also did an animated version, not using his face of course. There's a [inaudible] animated model; I assigned the hair onto its surface and made it dance. So yeah, that's the end of my part. Great. All right, I'm going to hand over to Mark to talk about some awesome stuff with WebVR. - Yes, so I'm going to talk a little about WebVR. You've seen a lot of the cool demos, the kind of stuff we do at Kuva, and some of the work we're up to. This is a new field that we've started to get into quite a bit. I was going to do a big talk about the history of VR, where it's all come from and the context of it, but it was just too much. But I did really want to show you some really awesome images of '90s VR. If that was what WebVR was like now, that would just make my day. She just looks really happy.
My experience of VR in the '90s was pretty much like this. It was in a place called [inaudible] in London, which, if anyone is from that area, was just a big games place. You could go there, and it was one of these five-pounds-a-pop things where you'd go and spend two minutes. It was rubbish. It was really shit. And this is "Futurama", their vision of what VR was going to be, with these lovely neony colors and stuff. But, - [Male] LCD the screen. - I know, I know. The funny thing about it is, I swear Edan has that keyboard. I swear he does. But we all know what happened: the whole VR thing just died. I'm not going to go into too much of how we got to where we are, but the hardware was terrible. Those things you had on your head were CRT screens; they were monitors you'd strap to your face. It just wasn't going to work, and they were so heavy that they had to have a sort of counterbalance at the back. It was all wrong. We all know where we are today with it. This is the kind of stuff you see everywhere now: the HoloLens, the Oculus, the Sims, the GearVR, the HTC Vive, and the Daydream. It's everywhere. Everybody is doing VR now, and of course we're talking about WebVR. There are all these reasons why we've seen this renaissance, this revival. I'm not going to go into too much detail: basically, the technology has caught up, and primarily the displays have got better. We haven't got CRTs in our faces; we've got LCDs, which are lighter and much higher resolution, with faster refresh rates, and the persistence between renders is much, much better. There's a load of things. The key thing we aim for with any VR experience, whether it's WebVR or native VR, is this: we need to render at 90 Hertz. That's 90 frames per second, with really low latency.
I mean, this is not an exact science, because people have different perceptions of VR, but this is the general consensus. So what is WebVR, when we talk about it? I think it's confusing what people think WebVR is, because it's not really defined unless you've looked into it properly. In its simplest terms, WebVR is just a JavaScript API to interface with VR hardware. And what we specifically mean by that is the visual component: VR is this umbrella term for hacking senses, but in this instance we're specifically talking about the visual component, like the Oculus Rift and the Vive. All those devices, we can now interface with. It allows developers to create VR content on the web, or with web technology, I should say. And this is not very far away; it's going to land soon. You can create this content. The big caveat is that it only works with WebGL, and that puts it in a separate area away from traditional web development, which is a shame, I think. I think browser vendors and everyone involved are really trying to bring it in line with normal web development, because that makes a lot of sense. WebVR is not a 3D world with browser pages flying around like that. I think that's from Facebook. People have asked me about that, but I don't know what it is; that's just something mental. So what can you do with this technology? Now, I've got to find out where the mouse is. There it is. So, just to show a little demo: something we did with this fluid system. It's a curl noise simulation. What you're able to do with WebVR is put someone in the middle of that, and just place them in the space so they can move around. Not only can they move around, they can actually interact with that space. They can look around, they can move down, they can look above things, they can move outside of this box. It's an interesting thing to do.
We've tried this out with an HTC Vive with the controllers, and it's really good, a really interesting experience. I'll show you a quick... where is the mouse? Anyone see a mouse? There it is. This is a video that we've recorded. It's really hard to demonstrate VR in a presentation because it's a single-person experience, but let's try it again. So this is a recording of what we've done in VR, and this is all in Chrome, again. You can see you've got the left and the right screen, because obviously in VR you need to render the left and the right eye to get stereoscopic vision. That's where we get our depth cues from, this left and right rendering. This is just a quick video that shows someone interacting in the space. They're using the Vive controllers, these are models we've got from online, and they're using them to create and draw in the space. We've got this on our website. It's not enabled right now, but if you've got a kit you can go and play with it, if you've got the right browser. So, the API. Shall we look at the WebVR API? This is what's going to be exposed in JavaScript. It's not far off: it's going to be released in Chrome next month on Android devices, and then I think in April for desktop VR. It's really not that complex, so I'll just run you through it here. This is basically the usage. It's an extension to the navigator: you ask for the VR displays, and it returns a promise that, when fulfilled, returns the list of VR displays available on that device. Usually that's only one, unless you're rich and you've got four VR devices connected to your machine. So that returns, and then you just initiate the render, and this is pretty much what we're talking about with Three.js: you initiate the rendering of the VR content. And of course, if it fails, you just catch that. So it's not very complicated. The WebVR API has a very tiny surface.
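The flow described here can be sketched as follows. `navigator.getVRDisplays()` is the real WebVR 1.1 entry point, but the mock `navigatorLike` object and the `startRendering` callback below are made up so the sketch can run outside a browser.

```javascript
// Sketch of the WebVR entry point: get the displays, then kick off rendering.
// In a real browser you'd pass the actual `navigator` instead of this mock.

const navigatorLike = {
  getVRDisplays: () => Promise.resolve([{ displayName: 'Mock HMD' }]),
};

function initVR(nav, startRendering) {
  return nav.getVRDisplays()
    .then((displays) => {
      if (displays.length === 0) throw new Error('No VR displays found');
      // Usually there's only one display; take the first.
      startRendering(displays[0]);
      return displays[0];
    })
    .catch((err) => {
      // If it fails (no hardware, no API), you just catch that.
      console.error('WebVR unavailable:', err.message);
      return null;
    });
}

initVR(navigatorLike, (display) => {
  console.log('Rendering to', display.displayName);
});
```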
This is a little bit about the display that's returned. Very self-explanatory: you request presentation to the VR device, and there's requestAnimationFrame. It's different from your window's requestAnimationFrame, because you need to run at 90 Hertz, not 60, so this is actually a different render loop. This is where you get your positional data: it returns information about where the headset is located, its position, where people are looking. You can use that with the camera. Then stage parameters are to do with room-scale environments like the Vive, where you can actually define an area, a safe area to work in; you can get access to that in JavaScript. And capabilities just gives you information about the device you're working with. So it's pretty simple. And we've got a bit of a demo to show you. We've got some cardboard. - [Male] Cardboards. - So we were talking about how cardboards shouldn't be used, and now we're dishing them out. - [Female] It's so brilliant, [inaudible] - So grab some if you want to have a look, and then we're going to fire the link up and you can have a quick look at this WebVR demo. This [inaudible] we've done. Pass it around. Now, just a couple of things: if you want to go to this link, make sure you put that in. We've messed up our hosting environment, and if you don't, you'll just see nothing. So, with Edan's [inaudible] work and stuff like that, just make sure you've got it in. Who wants the nice one? Anyone want the nice ones? These ones we want back. So basically I just want to show you a quick demo of how easy it is to get a Three.js scene into WebVR. We've done a little hello-world demo, which isn't very exciting, but it shows you what it does. Now, we haven't really tested this on many devices, so if it doesn't work on your device, just hand it to someone else and hopefully they might show you on theirs. It's pretty simple.
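The display object described above can be sketched with a mock. The member names (`requestPresent`, `requestAnimationFrame`, `getPose`, `capabilities`) are modelled loosely on the WebVR spec, but the mock itself and its timing are simplified stand-ins so the loop can run anywhere.

```javascript
// Sketch of the VRDisplay described above, with a mock so it runs outside a
// browser. The display has its own render loop at the headset rate (90 Hz),
// separate from the window's 60 Hz requestAnimationFrame.

const mockDisplay = {
  capabilities: { hasPosition: true, canPresent: true },
  requestPresent: (layers) => Promise.resolve(),
  // The display's own rAF ticks at ~90 Hz, not the window's 60.
  requestAnimationFrame: (cb) => setTimeout(cb, 1000 / 90),
  getPose: () => ({
    position: [0, 1.6, 0],     // where the headset is in space
    orientation: [0, 0, 0, 1], // quaternion: where the user is looking
  }),
};

let frames = 0;
function render(display) {
  const pose = display.getPose();
  // ...feed pose.position / pose.orientation into your camera here...
  frames += 1;
  if (frames < 3) display.requestAnimationFrame(() => render(display));
}

// Request presentation, then start the display's render loop.
mockDisplay.requestPresent([]).then(() => render(mockDisplay));
```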
We were talking earlier about the Three.js scene. This is a Three.js scene: we've put some boxes up in the space, and then all that happens is you move around. Think about the Three.js stuff we saw earlier; this kind of stuff is what you'll see. These are the usual suspects of Three.js rendering. We're using Three.js here, but of course you can use any WebGL library. So you can use... Yes, sorry about the flakiness. It's Edan's fault. So basically we have a renderer, we have the scene, and we have the camera. Those are the three things you always have in a Three.js scene. These are the two things you probably haven't seen before: the VR controls are basically what enables you to look around, and the VR effect is basically what does the rendering, because you always have to render the left and render the right. Then we set up the world. This is just content; I didn't include it because it's basically not part of the VR section. This is the view once it's rendered in there. And then we have the render loop, very similar: controls update, render, requestAnimationFrame on the display object, so that it renders at 90 Hertz. And then this bit at the bottom, which you just saw, where we get the displays, which triggers, which shoots off the render loops. So it's really not that difficult. I mean, there are other elements to it, and you've got to consider the performance aspects, but as a way to just test it out, it's really not too difficult. One thing to note is that there's actually a polyfill for WebVR. This is how you're able to use it now, because WebVR hasn't landed; it's due to land soon, as I said, but this is a polyfill that was created by Boris Smus. He's a Googler, and he basically added this code that polyfills the WebVR API so you can use it now. And it does it in very simple terms: if you're on a mobile device and you haven't got it, it uses the device orientation, and on desktop it uses the mouse.
So when we're looking at it on mobile now, we're probably looking at it in a very unoptimized way, because when VR is actually ready it won't turn off the screen, you won't have those issues with screen locking, and it'll probably be faster because it's got an optimized rendering path. There are a lot of different elements, but it's just nice to have this polyfill available at the moment. Also, what about interaction? In what we've seen, there's not really any interaction: you're just looking around in space, it doesn't do anything. There is a way, which was originally in the WebVR API but has now been moved out to the Gamepad extensions, where you can use the VR controllers. It allows haptic feedback from devices, vibrating and stuff like that. That's how you would access that, and that's how you interact with things in the space. We used Three.js for that demo, but there's a lot of work going on in other spaces in terms of WebVR specifically. For those who are familiar with React, there's React VR, which is like a declarative API to do the same thing, and it abstracts away a lot of the problems of setting up a render loop. A-Frame is really good, I've been told. I've not tried it, but it's built around Three.js, and it's again a really good declarative API for writing WebVR: you can set up objects and boxes in a nested sort of fashion, and it reads really nicely. Carmel is the browser by Oculus, a WebVR browser; it's worth checking out. And PlayCanvas is a platform and rendering library that you can use to do WebVR. So there's a lot of stuff going on behind the scenes; you're not just limited to Three.js, though that is one of the preferred options. So this is great, but what can it be used for? What are its applications? The obvious one... I'm just waiting for the time someone goes, "Could you build this 3D shopping cart or something?"
I think that's the one that's going to come, and everyone's going to see it, and we're waiting for it. But as a studio we use a lot of web technologies, and we appropriate them for things outside the web. We do installations, mainly. This is a little installation that we, or I, did last year. This is another WebGL animation. It's not very complex, but it looks nice. We built it as an installation that was going to the Geneva Car Show: this screen here, this big display, and we had this animation running in the background. Now, when we were making it with the client, we would send them this link to the web browser, and they could see it in situ as a big animation, just as you see. However, we thought that if we added a WebVR component to it, we could mark up the screen, and we were able to put that animation as a texture onto this model. What that allowed was that the client could go in there and see the animation in context, which is a great design tool: unless you see how these animations work in situ, you don't really understand how they're going to work. So yeah, it was a really good way of doing that. Let me just find my... The other thing we've done was this. This is something we were playing around with a couple of weeks ago. Now, it's not strictly WebVR, but it's pretty cool. We're using the Vive controllers. I don't know if we've got one in here? - I don't think so. - We didn't bring it, okay. We're using Vive controllers here. So we're basically misappropriating the positional tracking in the Vive, and this is all running in Chrome. This is a big screen, and this is the screen. Notice Edan's lovely slow dancing with the monitor; you've got to move really slowly, otherwise the tracking doesn't work. But this is all running in Chrome, and what we're doing here is using the positional tracking you just saw, the Lighthouse system, which is the Vive thing, to do projection mapping. Now, projection mapping is not a new thing.
People have seen it before, but it's pretty cool to see it done in Chrome. Hang on. So you see here, what we're doing is using the Vive to map the screen: we're effectively calculating the orientation and the position of the screen in the room, in the actual physical space. By doing that, we can then use the actual controller and the position of the camera, move it around to get a 3D position, and re-render the Three.js content in a perspective-mapped way. And what that also enables is that you can actually track multiple monitors, and you can connect loads of things. There's the keyboard, see. I told you. I told you. We can do as many as possible; we were going to put screens there and there, just everywhere, but we ran out of money. But this is just the kind of thing: of course, it's web technology that you're never going to use. You're never going to go to a client and say, "I've got a great idea for a website", because it won't work, but it's great to see what we can do with existing web technologies. We've already seen it with appropriating what you can do and building something else, and that's a lot of what we actually do at our studio. Now, this is another thing we're working on. I'm going to finish on this one. This is basically a project we're working on with a company called Marshmallow Laser Feast, who are based in London. They've got some awesome scans of the sequoia trees, the big redwoods out in California. They've gone out and scanned these things and made a VR experience about it, and it's amazing. It's brilliant, it's beautiful, and we're working with them to bring this to the web, to do a web experience based on it. We're so excited about it, because it'll let people online just pick up a device, and we're hoping to get it to mobile, not a big Oculus Rift.
This is for mobile, and these are the guys we're working with; that's the scans in the various woods. So that's what we're working on next. We should have that done within a couple of months, we're hoping, as WebVR actually rolls out. And that's it. That's us. You can hit us up on here: that's our Twitter URL, that's our website. We've not got much on our website at the moment, I reckon, and we are hiring. Any questions? I'm going to throw this to you. - [Female] [inaudible] - All right. Okay. - [Male] My question was about the positions... you're calculating the physics and the positions with the graphics? - For some of the demos? - Yeah. - Yeah, I mean, well, this is part of the GPGPU stuff that Edan was talking about. A lot of the demos used a noise function, which is hard to describe, but it's a function you can give parameters to. - Perlin? - Perlin, no, no. It's called curl, curl noise. Curl noise is actually a derivative of Perlin noise. If you've seen Perlin noise before you can recognize it everywhere, but this is curl noise, which calculates the curl, which is the movement of something in a flow field, and it creates these beautiful noise forms that you can do nice things with. And then you can add these signed distance functions that Edan talked about to push things away. So it's a really good, really fast way to approximate a fluid simulation. - [Male] So you put it in as a texture? - Yes. - How? - You use another shader... in the vertex shader, we read back from the texture, because in the vertex shader we move around the position, so we read the position and then we can move the particle to it. - So basically you use a big texture, say 512 by 512, and you store every position inside as a color value, and then you read that out, and then you write back into a different texture.
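The curl noise plus signed-distance "push" described in this answer can be sketched in 2D. This is a simplified illustration: the analytic potential stands in for Perlin noise, and the function names are made up for the example.

```javascript
// Sketch: a divergence-free "curl noise" velocity field, plus a signed
// distance function to push particles out of an obstacle.

const potential = (x, y) => Math.sin(x) * Math.cos(y); // stand-in for Perlin

// 2D curl: rotate the gradient of the potential 90 degrees. The resulting
// field is divergence-free, which gives the fluid-like, non-clumping look.
function curlVelocity(x, y, eps = 1e-4) {
  const dpdx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps);
  const dpdy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps);
  return [dpdy, -dpdx];
}

// Signed distance to a circle: negative inside, positive outside.
const circleSDF = (x, y, cx, cy, r) => Math.hypot(x - cx, y - cy) - r;

// One simulation step: advect along the curl field, then push out of the circle.
function step([x, y], dt = 0.01) {
  const [vx, vy] = curlVelocity(x, y);
  let nx = x + vx * dt, ny = y + vy * dt;
  const d = circleSDF(nx, ny, 0, 0, 0.5);
  if (d < 0) {                      // inside the obstacle: push to the surface
    const len = Math.hypot(nx, ny) || 1;
    nx -= (nx / len) * d;           // d is negative, so this pushes outward
    ny -= (ny / len) * d;
  }
  return [nx, ny];
}
```

In the GPU version, `step` is the fragment shader of a simulation pass, reading positions from one texture and writing the next frame's positions into another.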
It's kind of based around that, and then in WebGL 2 you can use, was it transform feedback? - Yes. - Which is a similar way of doing it: you can do vertex operations on a big [inaudible]. It does something similar. It's a hard one to convey, I think. - [Male] The other question was, do you have [inaudible] that are simpler examples than those ones, showing that part of it? - The curl noise and the particle effects? - Processing the data. - No, I don't think we do, but there might be. - Actually, even in the Three.js examples, if you look for [inaudible] or something like that, there is an example that teaches you how to do this kind of ping-pong drawing. - Ping pong? - Yeah. - Anyone else? All right. [inaudible] - [Male] [inaudible] creative stuff. You [inaudible] entirely on Three.js, using the controls from the HTC? - From the HTC Vive, yeah, yeah. So basically, the controls, well, for the Touch and the Vive, are super precise. They've got really high resolution and can measure very small distances. So what we would do is we would have a controller, and we basically record points, and what that does is give you the surface of the screen. If you know the surface of the screen, you know which way it's pointing, and you know its position. Then we basically use that, but we stop rendering content in it, and then we take the controller, put it by the camera, and as we're moving the camera around... When you have a camera, you have a projection matrix: the camera is there, and it projects like that, but obviously if you know the point, the origin of the camera, you can move it around in space and use that same camera, and then it basically gives you that effect. And of course, if anyone else is standing in the room, it just looks wrong. If you've done projection mapping, that's just one of those things. Anyone in between, just to make the throw less.
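The "record points, get the surface of the screen" step described in this answer comes down to plane fitting. A minimal sketch, assuming three tracked corner positions from a controller; the function and variable names are illustrative, not a real API.

```javascript
// Sketch: recover a screen's orientation and position from three recorded
// controller positions. The plane normal gives the facing direction; the
// centroid gives the position.

const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const normalize = (v) => {
  const l = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / l, v[1] / l, v[2] / l];
};

function screenFromPoints(p0, p1, p2) {
  // p0→p1 and p0→p2 span the screen plane; their cross product is the normal.
  const normal = normalize(cross(sub(p1, p0), sub(p2, p0)));
  const center = [0, 1, 2].map((i) => (p0[i] + p1[i] + p2[i]) / 3);
  return { normal, center };
}

// Example: three corners of a screen lying in the z = 0 plane.
const screen = screenFromPoints([0, 0, 0], [1, 0, 0], [0, 1, 0]);
```

With the screen plane known, the tracked camera position can drive an off-axis projection matrix so the rendered content stays perspective-correct from the viewer's point of view.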
- [Male] On the Leap Motion, were you using leap.js? - I think so. I was using the official one, because basically you don't run any web software; it's sort of an extension or something doing it behind the scenes, so I don't need to run a server to do that kind of calibration. [inaudible]. - Yeah. - That kind of library. - [Male] So you alluded earlier to WebGL 2 supporting transform feedback, but I was wondering, with [inaudible], do you guys have a hard requirement on WebGL 2? Or are you targeting GL 1, or what? - With [inaudible] in the target, like we did... I think we formerly did, I think. - The hair? - The hair. - The hair. We used an extension in WebGL 1. - It's tricky, because Three.js, which is obviously the de facto standard for using these things, hasn't got support for WebGL 2. I mean, I'm not really sure. I know there are a lot of people on it, but there's nothing really there, and it's really hard to add that on without essentially removing the benefit of using Three.js. So we started toying around with a much lower-level API, just a little bit above WebGL, that makes it a lot easier to add these features as we need them. We haven't done a lot with WebGL 2, and there's no real hard requirement on it, but there are loads of things to play with. - [Female] Have you [inaudible] - The assembly stuff, are you talking about? - Yeah, those kind of [inaudible] - No, we haven't. We've been looking at the whole WebAssembly stuff. I think I was talking to someone about it earlier, sort of following it. I've not really done anything with it. It would be great; we'd like to see it, because there's a lot of stuff you can do, like compiling C code down to essentially binaries. But no, not really. It's supposed to be a real change in development, because you suddenly get crazy-speed access to do some crazy things in JavaScript. So I'd love that, I think, when it comes along.
- [Male] Okay, you just want to sit. Please give a huge thank you to Mark and Edan.