OBS HueShift Shader

A few weeks ago I was watching Gael Level on Twitch. During his stream he had a fun effect going on where he shifted the hue of the camera. I later asked him what plugin he was using. It turned out there wasn’t one; he was creating the effect by hand… Since I’m a developer and don’t like repetitive work, like clicking, I decided to automate this.

The Shader

In OBS you can install plugins to help with various things. There is an extension, called OBS ShaderFilter, that adds shaders to various elements. You use it to load and execute shaders written in the HLSL language. I wrote the following shader:

You can also download it here.
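At its core, the effect just rotates each pixel’s hue over time. Here’s the same math sketched in JavaScript, purely as an illustration of the idea (this is not the HLSL source; the function names are my own):

```javascript
// Convert an RGB color (components 0..1) to HSV (h in degrees, s and v in 0..1).
function rgbToHsv([r, g, b]) {
  const max = Math.max(r, g, b);
  const min = Math.min(r, g, b);
  const d = max - min;
  let h = 0;
  if (d !== 0) {
    if (max === r) h = 60 * (((g - b) / d) % 6);
    else if (max === g) h = 60 * ((b - r) / d + 2);
    else h = 60 * ((r - g) / d + 4);
  }
  if (h < 0) h += 360;
  return [h, max === 0 ? 0 : d / max, max];
}

// Convert HSV back to RGB.
function hsvToRgb([h, s, v]) {
  const c = v * s;
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));
  const m = v - c;
  let rgb;
  if (h < 60) rgb = [c, x, 0];
  else if (h < 120) rgb = [x, c, 0];
  else if (h < 180) rgb = [0, c, x];
  else if (h < 240) rgb = [0, x, c];
  else if (h < 300) rgb = [x, 0, c];
  else rgb = [c, 0, x];
  return rgb.map((ch) => ch + m);
}

// Shift the hue of an RGB color by the given number of degrees.
// In the shader, this runs per pixel with the angle animated over time.
function hueShift(rgb, degrees) {
  const [h, s, v] = rgbToHsv(rgb);
  return hsvToRgb([(h + degrees) % 360, s, v]);
}
```

Shifting pure red by 120° gives pure green, shifting by another 120° gives blue, and so on around the color wheel.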

Adding it to a camera

To add the shader, you’ll have to add it to a source in OBS. Just right-click a source and go to Filters. Click the plus icon in the lower left corner and add a user-defined shader. Check the property to load a shader from a file and select the file containing the shader. You can tweak the properties until you have something you like.

And that’s it.


Shout out to Gael Level for the idea of this effect:


Twitch Stream Setup

Whenever I have some spare time I get into coding on my personal projects. A few years ago I started live coding, on LiveCoding.tv at the time. There were some people streaming programming on Twitch back then, but they were mostly game devs, and that was not what I was doing. I tried switching over to Twitch, but the channel’s lack of growth and my own lack of motivation led me to stop. Earlier this year I decided to pick it up again, this time only on Twitch, where the community of live coders is growing fast. I felt welcome in the community, and thanks to its support I got affiliated pretty quickly. If you are looking for people coding on Twitch, you should have a look at the LiveCoders team.

A lot is going on when running a stream. In this post, I’d like to give you a look backstage to what is going on during the streams.

Of course, there’s audio, video and a view of my screen. But there’s a lot more going on…


Let’s have a look at the hardware I’m using for the live coding streams first. Because I also do projects with VR and graphics for work besides web development, my employer (Centric) was kind enough to provide me with an Alienware 17 R5 laptop. This is perfect for 3D modeling and running the Oculus Rift, but it also has enough power to encode video for a live stream. The issues I used to have with my previous laptop, like dropped frames, are completely gone. I will go into more detail about OBS later.

This is the list of hardware I’m using:

  • Alienware 17 R5
  • HP LA2405wg Monitor (old, but still works fine)
  • Das Keyboard Professional
  • Logitech MX Master 2S
  • Logitech C920
  • Blue Yeti
  • Elgato Stream Deck
  • 8 port Sitecom powered USB hub

Software-wise I’m using OBS to stream. I code (mostly) in VSCode with the Synthwave ’84 theme. For the audio, I use VoiceMeeter Potato to route everything and to have fine control over all the volumes. I use SoundByte to play sound effects. The music I play in the background during the stream is Monstercat. I built a custom player that runs inside OBS and renders a visualization of the audio. I’ve also got a chatbot running, which I also built myself. Her name is Rosie, named after the maid in the TV series The Jetsons. At the moment I’m using StreamLabs for the alerts.


In the early days of my stream, I used a cheap headset. The quality of the audio was very bad; the only benefit was that I could take it anywhere and stream from pretty much any location. I replaced it with a Blue Yeti. This is a USB microphone that can be used on its own stand. I removed that stand and screwed the mic onto a boom arm. During a stream, it sits just below the camera, between the keyboard and my mouth. When I started streaming I didn’t bother too much with the audio, but slowly I’m getting more and more obsessed with it. I think it’s one of the hardest things to get right when streaming. I know that at some point in the future I will replace the Yeti with a good XLR mic and replace VoiceMeeter with a hardware mixer. These things cost a lot of money though.


During a live stream, I have a couple of things going on that make sound, and I want full control over the volume of each. I used to have everything set in OBS, but that wasn’t fine-grained enough; I needed more control. Then I came across VB-Audio VoiceMeeter. Since I wanted the most control possible, I got VoiceMeeter Potato. This tool is donationware, which means you pay whatever amount you want for a license.

I have my mic, the music, sound effects, and my desktop all going to separate channels. I also use a channel in VoiceMeeter for a USB mixer I use sometimes outside the streams. I also have a channel reserved for Spotify. I’m not using Spotify anymore during the stream but might listen to it while working.

Let me explain my VoiceMeeter setup in a little more detail. Here’s a screenshot of my setup:

If we go over the channels from left to right, we start with the channel for the microphone. I’ve routed this to B3 only. The ‘B’ channels on the mixer are virtual outputs, where the ‘A’ channels are routed to real hardware outputs. I’m only using B3 for my microphone. This way I can get it onto a separate input in OBS and have OBS mute it.

The second channel is the output from OBS. I’m using an extra free program from VB-Audio here as well: VB-Cable. This program gives you an extra ‘hardware’ audio output to work with, on top of the three you get with VoiceMeeter Potato. I’ve routed this channel to B2 for use in OBS and to A1, which I’ve routed to my Blue Yeti. The microphone also has a headphone output and volume control on it. Without routing the sound there, I wouldn’t hear anything but my own voice.

I’m skipping the other hardware inputs since they are not used for streaming, and move on to the virtual inputs. For the stream, I use only VoiceMeeter AUX for my desktop audio and VoiceMeeter VAIO 3 for the sound effects. Both are routed to B2 and at least to A1.

Here’s a sketch of the setup.

As an experiment, I tried having the audio on channel 2 going only to stream (B2) and Spotify only to my headphones (A1). This actually works :) This way I can listen to some music while the viewers of the live stream listen to something else. I’m not planning on using this while streaming, but it is nice to know that it is possible to do things like that.

If you have trouble getting the outputs of your applications routed to the right channel in VoiceMeeter, try the Windows Mixer, which you can find by typing ‘mixer’ in your start menu. Windows Mixer

Within these settings, you can specify which output should be used for every app. I have set SoundByte to output to VoiceMeeter VAIO3 Input and OBS to CABLE Input.

Normally you don’t need to route OBS to a separate channel. If you are using alerts from StreamLabs with sound you might, but you’ll probably be fine without. I wanted to do something special with the audio, so I created a music player to play and visualize the audio myself. Since this runs inside a browser window in OBS, I use VoiceMeeter to control its volume. Windows Mixer

Audio inputs and routing in a list:

Input                  From                 Routed to
Blue Yeti Microphone   Hardware In          B3
OBS (music)            VB-Cable             A1, B2
Desktop Audio          VoiceMeeter AUX      A1, B2
Sound FX               VoiceMeeter VAIO 3   A1, B2


Inside OBS I don’t have to do a lot with the audio anymore. I get the correct mix and the mic on a separate channel. I added them to two different input channels in the Audio Settings in OBS. OBS Audio Settings

Instead of using the VoiceMeeter names I renamed them in the mixer so it’s clear what both channels are. I’ve set the volume of the mic a little bit higher than the rest of the audio. OBS Mixer

To make my voice sound a little better, I’ve added a couple of filters to the mic. In OBS you can use VST plugins, which are very common in audio programs. I’m using the free Reaper VST plugins to improve the audio. The settings are not perfect yet and I’m constantly tweaking them to get a better sound. OBS Voice Filters

If you don’t care too much, you only need the first one: a noise gate. A noise gate sets a minimum audio level; if the audio falls below this threshold, it is muted. This removes any noise the mic might pick up when you are not talking. I’ve used the gate that comes with OBS. You’ll have to play around a bit to find the settings that are right for you. OBS Noise Gate Settings
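To give an idea of what a gate does, here’s a minimal JavaScript sketch. This is illustrative only; a real gate (including the one in OBS) also tracks the signal envelope with attack, hold, and release times so it doesn’t chop words off:

```javascript
// Minimal noise gate sketch: samples (values in -1..1) whose level is
// below the threshold are muted, everything else passes through unchanged.
function noiseGate(samples, threshold) {
  return samples.map((s) => (Math.abs(s) < threshold ? 0 : s));
}
```

With a threshold of 0.1, quiet background hiss like 0.01 is silenced while normal speech levels pass through untouched.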

Noise suppression is the second most important filter I use. This filter removes a lot of the noise while you are talking. There’s always a lot of background noise coming from my PC; noise suppression takes care of that. In this case, I’m using the ReaFir VST. You can train it to create a noise profile and use that profile to remove the noise while talking. I’ve also created a second noise suppression profile for when my fan is blowing. I’d rather not use that one, since there’s a lot of suppression going on, which affects the sound a lot. These are the settings I normally use: OBS Noise Suppression Settings

Compression is used to balance the louder and quieter moments while speaking. This makes sure the audio doesn’t get distorted when talking too loud, while also boosting a bit when talking softly. OBS Compression Settings
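The core idea of a compressor can be sketched in a few lines of JavaScript. This is illustrative only; the threshold and ratio values below are examples, not my actual settings:

```javascript
// Sketch of compressor gain reduction: levels (in dB) above the threshold
// are scaled down by the ratio, levels below pass through unchanged.
function compressDb(levelDb, thresholdDb = -18, ratio = 4) {
  if (levelDb <= thresholdDb) return levelDb;           // quiet parts: untouched
  return thresholdDb + (levelDb - thresholdDb) / ratio; // loud parts: reduced
}
```

With a -18 dB threshold and a 4:1 ratio, a -10 dB peak (8 dB over the threshold) comes out at -16 dB (only 2 dB over), which evens out the loud and quiet moments.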

The last filter I use is an EQ. This filter is used to boost or suppress frequencies. Both the compressor and EQ filters are in constant motion; I tweak them a lot during streams. OBS EQ Settings

There are a lot of tutorials on these VSTs available on YouTube. Like this one from Tuts+ Music & Audio.


The first purchase I made for the stream, I think even before my first stream, was a webcam. From the start, I’ve been using a Logitech C920. The quality of this camera is pretty good for its price, and it is very easy to set up: just plug in the USB and you are good to go.

In OBS I have one webcam source that I reuse everywhere. The camera is a bit laggy when it starts, and I don’t want it to restart when switching scenes.

As for settings: on the webcam itself I disabled all of the auto adjustments. I don’t want anything to change outside of my control. I’m not moving around, so the focus doesn’t have to change. And since I let people control my lighting through chat, I want to keep the white balance and exposure always the same. The only problem I’m having with these settings is that they reset now and then.

In OBS I’ve added a couple of filters on the camera as well. OBS Webcam Filters

The only one that is making a real difference is color correction. You’ll have to play with these settings yourself to see what you like. OBS Webcam Color Correction

I think that without color correction the video seems a bit too gray. OBS Webcam without Color CorrectionOBS Webcam Color with Correction

The other two filters are a crop, which changes the width of the video a bit to better fit my layouts, and a tiny bit of sharpening (0.05).


At the moment I’m using 3 lights during my stream. 2 of them can be controlled by the viewers by giving a !light command in the chat.

I use these lights:

I’ve got the white IKEA light pretty close to the left of me. The Neewer is a bit further away and pointed towards the wall to give a more diffuse light. The colored one is behind me, just outside the camera view and illuminates the back wall.

I bought the IKEA light just to try it and see how it looks. I’ve got a couple of Philips Hue lights around the house as well. The great thing about the IKEA smart lights is that they connect to the base station of the Hue lights and can be controlled in the same way. If you have a Hue bridge, you only need to buy the bulbs to use them.


The application I use for the stream itself is Open Broadcaster Software, or OBS. This program lets you create scenes, configure what you want to capture, and cast it to various destinations, like Twitch or YouTube.

I’ve never used any other program for streaming. I’m using it to stream to Twitch, but I’ve also used it to stream to YouTube or record videos. You can have different setups stored and it is easy to switch between them.


OBS makes use of scenes. Scenes are what you see when watching the stream. They are built out of various sources like the webcam, desktop capture, animations, texts, and browser windows.

I recently cleaned up the scenes and the sources. It was a mess of all kinds of old, unused, and hidden sources. I created two scenes, for alerts and texts, that are reused in various other scenes. I also color-coded the sources so I can quickly see what is where in the scene. To be sure I don’t accidentally move a source, I locked everything.

OBS Scenes detail

Pre stream

A few minutes before I go live I already start the stream and send out a tweet at the same time, so my followers know I’ve gone live. Having a countdown or waiting room gives everyone a few minutes to come in before I start.

I don’t always use the timer; if it’s only a very short or odd number of minutes before I go live, I hide it. I added a ticker that shows random texts, just for fun. This is actually a browser window coming from my layouts application.

The chat is an overlay coming from StreamLabs. I styled it to look similar to the theme I’m using in VSCode.

Pre stream - Webcam

When I start the stream I welcome everyone using this scene. It has chat and a big webcam view. I kept the ‘almost there’ text in there so that when people pass by the stream they know I’m about to start.

Regular Stream

This scene is used the most during the stream. It has chat, the webcam, and a view of the desktop. The background animation in this scene has a mask and is actually on top of the webcam and the desktop view. I want to cut off my Windows taskbar, and the easiest way of doing that is just hiding it. I tried using a transparent animation, but that was way too CPU intensive, so I used the same MP4 and added a mask filter with a black and white image.

This scene has everything else going as well, the exploding emotes, alerts and the music player.

Regular Stream - Webcam

This scene is the same as the previous one but has the webcam and desktop views switched.

Be Right Back

Sometimes I’m interrupted and need to leave the computer for a few minutes. It rarely happens, but when it does I use the Be Right Back scene. I’ve got the chat and the alerts in there.

Post Stream - Webcam

When the stream ends I switch to this scene. It has a big webcam view and the end credits to the side. The end credits are coming from my own layouts. It doesn’t have a chat, but still has alerts.

Post Stream

The last scene is similar to the previous one but without the webcam. Sometimes I want to keep the stream running a little longer, for example when I raid someone. When the raid happens I switch to this scene. The raid is not recorded by Twitch, but the normal stream is. This way you won’t see me moving around, but the VOD still gets a few extra seconds at the end.



One of the most used pieces of hardware among streamers is the Elgato Stream Deck. This is a device with programmable buttons. Each button is a small display that can show information. The device is very powerful. I can’t live without it anymore.

Stream Deck

The Stream Deck has a couple of great features I use very often. The first is the ability to trigger multiple things with one button: a Multi-Action. I use this, for example, to start all the applications I need for streaming, or to change a scene in OBS and mute the mic at the same time. I also have a couple of Multi-Actions that I use when I go live. These send out tweets, set the title of the stream on Twitch, select the right scene in OBS, and trigger OBS to start the stream.

Bot & Tools

To have a little automated help (and fun) I created my own personal stream maid, Rosie, a chatbot written in Node.js. She is inspired by the maid in the old cartoon The Jetsons. During the stream and in between, I add commands and features to this bot.

To give an example, I created the !light commands. These use the Philips Hue API on my local network to change the color of the light behind me. This light is also triggered when events happen during the stream.
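As a rough illustration, mapping a chat command to a Hue API state payload could look like the sketch below. The color values and function names are my own, not Rosie’s actual code; the payload shape matches the Hue bridge’s lights state endpoint, where hue runs from 0 to 65535 and saturation from 0 to 254:

```javascript
// Illustrative hue values on the Hue bridge's 0..65535 scale.
const COLORS = {
  red: 0,
  green: 25500,
  blue: 46920,
};

// Parse a chat message like "!light blue" into the JSON body you would
// PUT to the bridge's /api/<user>/lights/<id>/state endpoint.
function lightCommandToPayload(message) {
  const match = message.match(/^!light\s+(\w+)/i);
  if (!match) return null;                       // not a !light command
  const hue = COLORS[match[1].toLowerCase()];
  if (hue === undefined) return null;            // unknown color name
  return { on: true, sat: 254, hue };
}
```

An unknown color or an unrelated chat message simply produces no payload, so the bot can ignore it.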

The sound effects are a lot of fun as well. These use MIDI notes to trigger the effects in SoundByte. I also use MIDI to lower the volume of the music playing during events.

I integrated Rosie with the Microsoft QnA platform. When a question is asked in chat, Rosie does a call to this service to see if there’s an answer to frequently asked questions. For example what theme I’m using in VSCode or when my next stream is.

During the stream, I also run another Node.js application that is responsible for the overlays in OBS. The exploding emotes and even the music run from here. For the music I created a player without controls that just plays a random song from a folder. Another page, connected through WebSockets, controls the music. I use the Web Audio API to create the visualization of the audio.


I think that’s about it. If I forgot something I’ll add it. Feel free to ask any questions about the setup, Rosie or layouts during my streams. I’m happy to help. So come and visit me at twitch.tv/sorskoot or join the discord.

JS13KGames 2018 Post Mortem

And another JS13KGames competition is over. This year’s theme was ‘Offline’. Since BabylonJS was allowed in the WebXR category this time, and I had been looking into Babylon lately, I decided to go with that. On the JS13kGames website you can play my entry, Lasergrid.

The first concept I had in mind was set in a factory, where the player gets an order on a display in the form of a selection of differently colored objects. The player then has to look at the conveyor belt and push everything that’s not in the order off. The problem I ran into very quickly was that I couldn’t use physics in Babylon by default. Although there’s a physics library available for Babylon, it’s an external library that I couldn’t fit into the allowed 13kB. So I went with my second idea: a laser grid in which the player has to turn objects to guide the laser from the start to the finish.


BabylonJS and Virtual Reality

Setting BabylonJS up for Virtual Reality is very easy. It’s not ‘on’ by default, but there’s a helper function to add everything including the icon to switch to VR. It is usually called right after initializing the scene: this.vrHelper = scene.createDefaultVRExperience();. Moving around in the browser works right out of the box. Teleportation in VR is also very simple to add. I created a mesh for the ground and called enableTeleportation:

    this.vrHelper.enableTeleportation({
        floorMeshName: ground.name
    });

With that out of the way, I added a simple data structure for the puzzles and created a few basic cubes as objects to work with. When these are clicked in the browser they would rotate.

With this part working I was confident enough to start working on the full game.

Challenge 1 - Textures

Since even a PNG gets larger than 13kB very quickly, texturing is always a challenge. When you try to shrink images down, you quickly end up with a pixel art look, so that was the look I went for. Of course, I used my favorite pixel art tool, PyxelEdit. PyxelEdit Applying the textures to the models gave me some issues as well, which takes me to the next challenge:

Challenge 2 - Models

The models for the game would consist of very simple objects. In the proof of concept, I used a box and an extruded triangle. At first, this worked fine. When texturing the models I learned that Babylon stores the UVW information in the vertices, not in the faces as I expected. This meant I could not map the textures per face: getting the UVWs for one face right meant breaking another.

I ended up ‘modeling’ in 3D Studio. I needed the UVW information, so I recreated the simple meshes in 3D Studio and detached some faces to give them their own vertices and UVWs. I wrote a custom script to extract all the information about vertices, faces, and UVWs to simple JavaScript objects.

Since I was working with 3D Studio already, this gave me the idea to use it for puzzle design as well. So I created another MaxScript to export a scene from 3D Studio to JSON. The scenes are very simple: I just add some cubes in various colors by setting the material ID. This gave me the possibility to rotate the cubes and build entire walls out of them without any problems. I included these scripts in the GitHub project as well, by the way, in case anyone wants to have a look.

Challenge 3 - Laser

I needed a way to calculate the laser beam, and decided to go with casting rays. Every object has a rotation value, which I used. The first ray is cast from the transmitter object. If it hits an object, I call a method on that object that returns one of four possible values: stop processing, go left, go right, or hit target. I planned on extending that list with other constants but never got to that. The laser beam repeats this process until it hits nothing, a wall, or the target. Every time it hits something, a new coordinate is added to an array. That array is then used to create a tube. I ended up adding a glow effect to the tube to make it look a bit more like a laser.
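That walk can be sketched as a small loop. The version below steps over a 2D grid for simplicity (names and the grid lookup are illustrative; the game casts actual Babylon rays between objects):

```javascript
// Possible responses from an object the beam hits.
const STOP = 0, LEFT = 1, RIGHT = 2, HIT = 3;

// Rotate a direction vector 90 degrees in the horizontal plane.
const turnLeft  = ([x, z]) => [-z, x];
const turnRight = ([x, z]) => [z, -x];

// Walk the beam from the transmitter; lookup(pos) returns an object's
// response, or null when the cell is empty. Returns the corner points
// that the tube mesh is later built from.
function traceLaser(start, dir, lookup, maxSteps = 64) {
  const points = [start];
  let pos = start;
  let d = dir;
  for (let i = 0; i < maxSteps; i++) {
    pos = [pos[0] + d[0], pos[1] + d[1]];
    const res = lookup(pos);
    if (res === null) continue;          // nothing here, keep casting
    points.push(pos);                    // the beam bends or ends here
    if (res === LEFT) d = turnLeft(d);
    else if (res === RIGHT) d = turnRight(d);
    else break;                          // STOP or HIT: beam ends
  }
  return points;
}
```

With a right-turning mirror two cells ahead and a target below it, the trace produces the three corner points of an L-shaped beam.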

Challenge 4 - Controls

To make the game into a real VR game I wanted to add Oculus Rift support, so I could rotate the blocks and move around using the Oculus Touch controllers. Babylon.js has some very simple functions to add interactions and teleportation. That is, until you really want to do something with them. There is a mesh selection event you can use, with a filter to limit exactly what can be selected in the game. Unfortunately, this event triggers when you point your controller at the object, without even pushing one of the buttons on the controller. I ended up having to track what the user is pointing at and have the controller respond to that. Another thing is that the triggers on the Oculus Touch controllers are not boolean values but can be anything between 0 and 1. As soon as you slightly touch the trigger this value changes and the event fires, and whenever you slightly change how far you are pressing, the value changes again. I fixed this by adding some more status values. And although it works, it is far from an elegant solution.
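One way to tame that analog trigger value is a little hysteresis helper, sketched below in plain JavaScript. This is my own illustration of the “extra status values” idea, not the code from the game; the two thresholds are example numbers:

```javascript
// Turn the controller's 0..1 trigger value into clean pressed/released
// events. Two thresholds (hysteresis) prevent small fluctuations around a
// single cutoff from firing a burst of events.
function createTriggerTracker(pressAt = 0.7, releaseAt = 0.3) {
  let pressed = false;
  return function update(value) {
    if (!pressed && value >= pressAt) { pressed = true; return "pressed"; }
    if (pressed && value <= releaseAt) { pressed = false; return "released"; }
    return null;                        // in-between wiggles are ignored
  };
}
```

Feeding the tracker every trigger update yields exactly one "pressed" when the player squeezes and one "released" when they let go, no matter how much the raw value jitters in between.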

Challenge 5 - Finishing

Finishing the game turned out to be the biggest challenge. I started working on the project as soon as I could, during my summer vacation in France. I was making some progress but got struck by the flu, which took me out for a whole week. I also had to finish some presentations and a workshop. I did learn a lot, though. I wasn’t planning on continuing the game, but after using it in a demo at our local WebXR NL meetup, people convinced me to keep working on it. I try to stream as much as possible on my Twitch channel and upload the recordings to my YouTube channel.


The main reason for me to work on compos like JS13kGames is just to have a fun project to work on and learn a lot in the process. The constraints force you to think outside the box. And although I got sick, I managed to create something playable, learn a lot and had fun.

@end3r, thank you for hosting this great compo every year! Already looking forward to the next.

Substance Painter to AFrame

I was working on a WebVR project the other day and was trying to get a model rendered with the correct textures, which I was creating in Substance Painter. I was going back and forth between various tools to get the textured model to render correctly. At first, I was using an .obj model, but I would rather have used a .glTF model. Luckily, there’s a very nice way to get directly to .glTF from Substance Painter.

When you are done painting your textures, go to the File menu and look for Export Textures….

Export step 1

In the config dropdown, find glTF PBR Metal Roughness. Depending on where I need the resulting files, I might lower the resolution of the textures to 512x512. When uploading your models to Facebook you need to do this to decrease the file size.

Export step 2

Make any other configuration changes where needed and hit export.

When you open the resulting folder you’ll end up with files like this.

Export result

Depending on the usage you can copy these to your project. If you only need the model with textures, the .glb file is probably the one you need. This file contains the .glTF with textures in a binary format.

To use the file in A-Frame, use the <a-gltf-model> tag. Like so:


    <a-scene>
      <a-assets>
        <a-asset-item id="art-model" src="/assets/art.glb"></a-asset-item>
      </a-assets>

      <a-gltf-model id="art" src="#art-model" position="0 2.5 -10"></a-gltf-model>
    </a-scene>

And that’s all!

BabylonJS WebVR Hello World

In a few weeks, we have our next WebXR NL meetup. That evening we are going to put a couple of WebVR frameworks head to head: A-Frame, ThreeJS, and BabylonJS. Since I happen to have some experience with BabylonJS, it falls to me to explain how to work with WebVR in BabylonJS. This post will be the first part: “Hello World”.


For this tutorial I use StackBlitz, but any other online or offline editor will work. In my case, I started a new TypeScript project in StackBlitz. This gives you an HTML file, a TS file, and a CSS file. The HTML file is the simplest: it contains only one element in the body, the canvas element. All rendering goes to this canvas.

The CSS file is pretty straightforward as well. It makes sure the canvas element fills the entire screen.


To get BabylonJS to work we need to install a few packages. First of all BabylonJS itself; this package also includes the TypeScript definitions.

BabylonJS also wants a couple of packages that you don’t need right away but may come in handy in the future. If you don’t add them, however, Babylon will complain.

  • Oimo => JavaScript 3D Physics engine
  • Cannon => JavaScript 3D Physics engine
  • Earcut => JavaScript triangulation library

With StackBlitz it is very easy and fast to install them. Just enter the name in the ‘enter package name’ field. If you miss one, StackBlitz will offer to install the missing package.

Main Class

I started by clearing the index.ts file with the exception of the import of the styles. I’ve added the import for BabylonJS as well. This will make sure the library is loaded and you can use it.

We need a TypeScript class for our app to run, I named it VRApp. Add an empty constructor and a function named ‘run()’. This is the basic outline of the class. After creating the class, instantiate it and call the run function.

Babylon works by having an Engine that talks to the lower-level WebGL. You also need one or more BabylonJS Scenes. A Scene contains, for example, the geometry, lights, and camera that need to be rendered. I created two private fields for these because they need to be available from different places in the class.

The engine itself is instantiated in the constructor of the VRApp class. You need to pass two parameters to the constructor of the BabylonJS Engine: a reference to the canvas and a bool to turn on antialiasing. After that, we can instantiate a scene and pass it the engine. Right now, your code should look something like:

Next, we need to add a few things to the scene to render. We need a light to illuminate the scene. The first light I often create is a hemispheric light. This light has a direction. This is not the direction of the light itself, but the reflection of the light. The hemispheric light is used to create some ambient lighting in your scene. For ambient lighting in combination with other lights, you often point this up. In this case, I kept it at an angle to get some shading going.

Lighting alone won’t do anything. We need some geometry. For the ground, I create a Ground Mesh. This plane is optimized for the ground and can be used in more advanced scenarios like octrees if you wish in the future.

The rest of the scene will be made from a couple of cubes randomly scattered around. I created a simple for-loop in which I create a cube mesh and change its position to a random value.
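The position logic of that loop can be sketched in isolation like this (pure JavaScript; the actual `BABYLON.MeshBuilder.CreateBox` call per cube is left out, and the range value is an example):

```javascript
// Generate random positions for the scattered cubes. Passing in the
// random source makes the logic easy to test; in the app you'd just
// use the Math.random default.
function scatterPositions(count, range = 10, rng = Math.random) {
  const positions = [];
  for (let i = 0; i < count; i++) {
    positions.push({
      x: (rng() - 0.5) * range, // spread left/right around the origin
      y: 0.5,                   // sit on top of the ground plane
      z: (rng() - 0.5) * range, // spread forward/back
    });
  }
  return positions;
}
```

Each generated position would then be assigned to a cube mesh’s `position` in the for-loop.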

Almost there. We need two more things. We need an implementation of the run function of the VRApp class. In this function, I provide the BabylonJS Engine I created in the beginning with a render loop. This function we provide to the engine is called every frame and is responsible for rendering the scene. This function can do more and probably will do more in the future, but for now, it only calls the render function of the scene.

At this point, you should see an error when running the application using StackBlitz.

And the error is correct: we didn’t create a camera. In a ‘normal’ WebGL application you need to create a camera, and you can do that in our case as well. But you don’t have to. Converting a WebGL project to WebVR takes some effort: you need to configure everything and render through a special camera. To make it as easy as possible, BabylonJS has a special method, createDefaultVRExperience, that creates all of this for you and converts your application to WebVR. The function creates a default VR experience helper. This helper adds the VR button to the UI, checks if WebVR is available, and by default creates (or replaces) the device orientation camera for you. I’ve added the following to the end of the constructor of the VRApp class:


The result of the tutorial should look something like this, the full code is in here as well:

You can open this code on StackBlitz and play with it yourself. Of course, there’s much more you can do with WebVR, but this is it for this tutorial. If you have any questions, feel free to add a comment or come to our meetup on the 12th of June in Eindhoven, The Netherlands.