OllyOllyBennett - The Siege and the Sandfox « on: May 13, 2016, 06:45:40 AM »

LATEST UPDATE: How we've improved Talking Animations in our game.





Hi all! Welcome to our dev log for The Siege and the Sandfox. I'm Olly, external producer and PR for the project. I'm new to Tig, so will initially be copying across summaries of the logs we've already posted from the Unreal Engine forums to get caught up, but once we're up to date, I'll ensure that all new entries get simultaneously posted here too. In the meantime, we'd love to hear your questions and feedback. Thanks.





A shining city lies besieged in the heart of a vast desert. As the moon rises over the royal palace, a notorious thief watches the king die at a traitor’s hand. Discovered, falsely accused, and thrown into the dungeon, our thief must make their escape before the siege breaks.



Master acrobatic abilities and traverse the cavernous depths of an ancient underground prison, evade detection by your captors and discover a greater evil stirs in the sands below. Fight for your city – foil the enemy’s attempt to strike at the heart of the kingdom.



The Siege and the Sandfox is what we have termed a ‘Stealthvania’. By infusing stealth into the typical Metroidvania experience we’re creating a unique twist on the ever-popular genre.













Deep, acrobatic control system - Wall running, grappling, swinging and more

Metroidvania style open world - Gain new abilities to help you explore the ancient fortress

Stealth focused - Systemic stealth inspired by Thief and Mark of The Ninja

Narrative driven - Deep lore to be discovered through ambient story telling

Pixel Art - Authentic-feeling art style, combined with UE4's modern lighting and particle systems



First Pass on character controller + movement. Includes sneaking, jumping, mantling, slides, wall jumps

First Pass camera system implemented

Tilemaps for all biomes created

Tilemap markup + collision system implemented



Twitter - @siegeandsandfox

Facebook - facebook.com/siegeandsandfox

Website - siegeandsandfox.com

« Last Edit: March 27, 2018, 01:32:16 AM by OllyOllyBennett »

The Siege and the Sandfox: Website

OllyOllyBennett - Re: The Siege and the Sandfox « Reply #1 on: May 13, 2016, 07:23:03 AM »

From Artist Keith - (Posted 4/4/16) -



After some internal review we decided that the lights in the game just weren't quite in keeping with the pixel art aesthetic: a bit too much gradient/inner-glow use, and therefore too many colours. So I went back and redrew the flames...



[Image comparison: OLD - NEW flames]

From Designer Chris - (Posted 11/4/16) -



The AI will use the AI Perception system and behaviour trees, as I've come to really like them while messing around with other projects. It may seem like overkill now, but it should pay off later when things get more complex. I've broken down my tasks into 3 sections - Navigation, perception, suspicion.



Navigation - All our tiles project out collision in 3D space, and then we can just place a nav mesh volume over this. Things started off badly, but got better by the end:





Perception - As said, we’re using the AI perception system here. It provides pretty much everything we need in a handy module, and plays nice with behaviour trees. At the moment I'm just using it to check if the player can be seen, in a very basic manner.



Suspicion - The suspicion system is what turns this from an action game into stealth...





The suspicion value ties to 3 states. It decays whenever the player isn't in the AI's sight, and the tree structure means the AI gracefully drops back down to the previous behaviour:



Not suspicious - Continue patrolling

Suspicious - stop and look in the direction of the suspicion

Pursue - Move towards the source of the suspicion
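The rise-and-decay idea behind those three states can be sketched in plain Python. This is purely illustrative; the class name, rates and thresholds are invented for the example, not values from the game:

```python
# Illustrative sketch of the suspicion model described above:
# suspicion rises while the player is visible, decays otherwise,
# and thresholds map the value onto the three behaviours.
# All numbers here are made up for the example.

SUSPICIOUS_AT = 0.3
PURSUE_AT = 1.0

class SuspicionMeter:
    def __init__(self, rise=0.5, decay=0.25):
        self.value = 0.0
        self.rise = rise      # per second while the player is seen
        self.decay = decay    # per second while the player is hidden

    def tick(self, player_visible, dt):
        if player_visible:
            self.value = min(PURSUE_AT, self.value + self.rise * dt)
        else:
            self.value = max(0.0, self.value - self.decay * dt)
        return self.state()

    def state(self):
        if self.value >= PURSUE_AT:
            return "pursue"
        if self.value >= SUSPICIOUS_AT:
            return "suspicious"
        return "patrol"

meter = SuspicionMeter()
print(meter.tick(True, 1.0))   # seen for 1s -> "suspicious"
print(meter.tick(False, 3.0))  # hidden for 3s -> decays back to "patrol"
```

Because the value decays rather than resetting, the AI drops back through the states gracefully instead of snapping straight to patrolling.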

From Artist Ed - (Posted 14/4/16) -



Subtle tweaks have been made to the Torch bearing guard's Attack animation, adding a more recognizable tail to the flame as it follows the swing of the arm. Proof (if it was ever needed) that Leonardo da Vinci was correct: "Art is never finished, only abandoned." We are not abandoning this yet though!





From Artist Keith - (Posted 21/4/16) -



We're trying to be more faithful to the 16-bit era by using fewer, if any, obvious transparent effects. This meant ditching the smoke effects we had and trying to come up with something more in keeping with our aesthetic. Ribbon particles seemed to be a cool way forward, so I have thrown something together and am quite happy with the result for now.









We are still rolling with particle effects, as hand-animating everything would take a very long time and has the added problem of always looking exactly the same; the chaos of a particle system is much more natural. We could get around that a bit by giving each instance a random start frame, so if we have multiple torches in one scene they aren't all flickering in time with each other!
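That random-start-frame trick is simple to express. Here's a hypothetical sketch (in the actual UE4 setup you'd randomise the flipbook's starting position per instance; the frame count and names here are invented):

```python
import random

FRAME_COUNT = 8  # frames in the torch flipbook (illustrative)

def torch_frame(start_offset, elapsed_frames):
    """Current frame for a torch that began at a random offset."""
    return (start_offset + elapsed_frames) % FRAME_COUNT

# Give each torch instance its own random starting frame so several
# torches in one scene don't flicker in lockstep.
random.seed(7)
torches = [random.randrange(FRAME_COUNT) for _ in range(3)]
for t in range(4):
    print([torch_frame(s, t) for s in torches])
```

Each torch advances through the same animation, but the per-instance offset desynchronises them for free.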



Other items to receive a bit of a buff are the doors. We felt they looked a bit too slim and unreadable to the player, and when open didn't have any sense of depth, as they kept the same 'lighting' as they opened or shut. At 8 frames, opening and shutting also felt a bit snappy, when in reality our hero would probably open and shut doors with a bit of care so as not to be discovered.



[Image comparison: OLD - NEW doors]

From Designer Chris - (Posted 23/4/16) -



I managed to spend a bit of time working on the AI today. I had to make some changes to some related systems to allow everything to run properly in simulation mode. A few things relied on there being a controlled pawn within the world, so this was a good chance to clean that up. I also set up some basic waypoints to allow the AI to patrol:





You can also see what I like to call the 'perception biscuit' as part of the gameplay debugging tool. As it's designed primarily for 3D games, it's of limited use from our perspective, and the following image might give you a better idea of what's going on:





From Artist Keith - (Posted 29/4/16) -



With us looking to build a nice tutorial area for preliminary outside playtesting, I decided to have a go at blueprinting dynamic control GUI elements that fade up and down based on player proximity, and can also switch out sprites depending on the platform being used (such as Xbox, PS4 or PC). I'm a pixel pusher by trade and logic always makes me cry, but I was quite surprised how easy it was for me to thump this out, and I was quite pleased with the results.
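The proximity fade boils down to mapping player distance onto an opacity value, plus a per-platform sprite lookup. A minimal sketch of the logic, assuming invented distance ranges and sprite names (the real version is a UE4 blueprint):

```python
def prompt_alpha(distance, fade_start=200.0, fade_end=100.0):
    """Opacity of a control prompt: invisible beyond fade_start,
    fully opaque inside fade_end, linear in between."""
    if distance >= fade_start:
        return 0.0
    if distance <= fade_end:
        return 1.0
    return (fade_start - distance) / (fade_start - fade_end)

def prompt_sprite(platform):
    """Swap the button glyph for the active platform (names invented)."""
    sprites = {"pc": "key_E.png", "xbox": "pad_X.png", "ps4": "pad_square.png"}
    return sprites.get(platform, sprites["pc"])

print(prompt_alpha(250))  # 0.0 - too far away
print(prompt_alpha(150))  # 0.5 - halfway through the fade
print(prompt_sprite("xbox"))
```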





In other developments, I have always been a bit miffed that the dawn and sunset images worked so well and were far more interesting to look at than the night scene was; it came off as far more flat and dull. With this in mind, I went back to it and decided to give it a new lick of paint.



« Last Edit: May 17, 2016, 02:11:14 AM by OllyOllyBennett »


crwilso - Re: The Siege and the Sandfox « Reply #3 on: May 23, 2016, 07:00:03 AM »



Hey folks, designer here o/



My task over the last few weeks is getting the AI up and running, so here's an update on my progress so far:



Creating a workable perception system for a 2D game in Unreal has proved a much bigger challenge than we had expected. However, despite the initial problems, we’ve managed to come up with something perfect for our needs. One of our coders, Rex, did a great job in creating a bespoke sense configuration for 2D for us.







What we now have is a collection of several 2D cones, which give a much better representation of the AI’s sight. We’ve also offset the position of the visibility raycast to go from eye to eye, rather than the centre of each character.



These two changes have made a massive difference to how it functions, and give us much more room to add interesting gameplay features. There’s plenty of tweaking ahead but by and large the AI sees you when you think it should, and hiding from it is also much more intuitive than before.



Speaking of tweaking these values, Rex also implemented a new debug display to replace Epic’s own Gameplay debugger for our purposes. It easily lets us know which cone(s) we are in, and whether the AI is successfully able to raycast to the player.
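A 2D sight cone check like the ones above reduces to an angle-and-range test. This is a simplified Python illustration of the idea, not the team's actual bespoke UE4 sense configuration:

```python
import math

def in_cone(eye, facing_deg, target, half_angle_deg, max_range):
    """True if target lies within a 2D view cone rooted at eye.
    The raycast is assumed to go eye-to-eye, so 'eye' and 'target'
    are eye positions rather than character centres."""
    dx, dy = target[0] - eye[0], target[1] - eye[1]
    dist = math.hypot(dx, dy)
    if dist > max_range or dist == 0:
        return False
    angle_to_target = math.degrees(math.atan2(dy, dx))
    # Wrap the angular difference into [-180, 180].
    delta = (angle_to_target - facing_deg + 180) % 360 - 180
    return abs(delta) <= half_angle_deg

eye = (0, 0)
print(in_cone(eye, 0, (100, 10), 30, 300))   # in front, inside cone -> True
print(in_cone(eye, 0, (-100, 0), 30, 300))   # behind the guard -> False
```

Several of these cones with different ranges and half-angles, layered together, give the multi-cone sight representation described above.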



EQS



Unreal Engine has a really powerful tool known as the ‘Environment Query System’. This system allows you to ask questions about the environment around you, and filter the answers down into usable bits of data. Using this hugely powerful tool for a 2D game is obviously overkill, but the key advantage is that queries are very easy to create and test using just Blueprint and the in-game editor. My aim wherever possible is for everything to remain completely readable by anyone on the team, so they can make their own improvements and suggestions.



At the moment I’m only implementing a few simple tests, for example: ‘find the nearest waypoint I can successfully plot a path to’ and ‘pick a random spot near to where I last saw the player’. Here's an example of picking a nearby valid point (the blue points are discarded as the AI can't reach them):







I’m looking forward to expanding these in the future using a few simple tricks to help the AI make smarter decisions without cheating too much. For example, when I create a query to pick a random spot near where the AI last saw the player, I can weight the results towards where the player actually is (even if they are hidden!).
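That kind of biased pick can be done with a simple score-and-sample scheme. An illustrative Python sketch, not the actual EQS test setup (the scoring function and bias value are invented):

```python
import math
import random

def weighted_search_point(candidates, actual_player_pos, bias=1.0):
    """Pick a search point, scoring candidates higher the closer they
    are to where the player actually is - a mild 'cheat' that makes
    the AI's guesses look smart without being omniscient."""
    def score(p):
        d = math.hypot(p[0] - actual_player_pos[0],
                       p[1] - actual_player_pos[1])
        return 1.0 / (1.0 + bias * d)  # closer -> higher weight

    weights = [score(p) for p in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

random.seed(1)
candidates = [(0, 0), (50, 0), (100, 0)]
# The player is really at (100, 0): that candidate is the most likely pick.
print(weighted_search_point(candidates, (100, 0)))
```

Raising `bias` sharpens the cheat; lowering it makes the search more honestly random.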



Last Known Position



Another important aspect of our perception system is the concept of ‘last known position’, which I’ll refer to as LKP from now on to preserve my sanity!



Unreal’s perception system has its own concept of LKP, but we aren’t using it just yet. My simplified version positions an object whenever the player leaves the view cones. This position is then used as a start point for a search, should the AI reach the point and still not get a visual on the player.



Having this LKP object also allows me to deploy another classic stealth AI cheat which I like to refer to as ‘6th sense’.



Imagine a situation where I pass through an AI’s view cones heading in a direction it can’t see. How do I make the AI seem smart and know which way the player has gone? Sure, I could make it super complicated by using things like Dead Reckoning combined with multiple EQS to decide which cover the player is likely to be in. This is the sort of thing you’d find in Crysis or Halo, and as such is somewhat beyond our scope as a 2D game.



Instead, the LKP is updated for a short time (~0.5 seconds) after the AI has technically lost sight of them. From the player's perspective, this usually just looks like intuition, and would only look like cheating if the time is too long. As with most things in life, this is best explained with a gif.







The green tick in the cross hair is my LKP. See how it continues to update even after the player character leaves the view cones.
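The '6th sense' grace window can be sketched as a tiny tracker in Python. Purely illustrative; the class name and timings are invented, with only the ~0.5 second window taken from the post:

```python
# Sketch of the 'last known position' grace window described above:
# the LKP keeps tracking the real player position for a short time
# after line of sight is lost.

GRACE = 0.5  # seconds of '6th sense' after losing sight

class LastKnownPosition:
    def __init__(self):
        self.pos = None
        self.time_since_lost = None

    def tick(self, player_pos, player_visible, dt):
        if player_visible:
            self.pos = player_pos
            self.time_since_lost = 0.0
        elif self.time_since_lost is not None:
            self.time_since_lost += dt
            if self.time_since_lost <= GRACE:
                self.pos = player_pos  # still updating: the 'cheat'
        return self.pos

lkp = LastKnownPosition()
lkp.tick((10, 0), True, 0.1)
lkp.tick((12, 0), False, 0.3)   # within grace: LKP follows the player
print(lkp.pos)                  # (12, 0)
lkp.tick((20, 0), False, 0.5)   # grace expired: LKP frozen
print(lkp.pos)                  # still (12, 0)
```

Keep `GRACE` short and the player reads it as intuition; make it long and it reads as cheating, exactly as the post says.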





Suspicion



Suspicion is what we use to determine what the AI thinks is worth investigating, and later chasing after. The rate the suspicion increases is determined by whichever cone the player character is currently in. If they are in more than one cone, the cone with the higher rate of suspicion is used.



Currently we have 3 cones:



Close Cone - Instant max suspicion. If the AI sees you here, it will chase immediately.

Mid Cone - Average level increase.

Far Cone - Small increase.
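Resolving the rate when the player overlaps several cones is just a max over the active cones. A toy sketch with invented numeric rates (the post only gives the qualitative ordering):

```python
# Invented per-cone suspicion rates (suspicion per second).
# 'close' is infinite to model instant max suspicion.
CONE_RATES = {"close": float("inf"), "mid": 0.5, "far": 0.1}

def suspicion_rate(cones_containing_player):
    """Use the highest-rate cone the player is currently inside."""
    if not cones_containing_player:
        return 0.0
    return max(CONE_RATES[c] for c in cones_containing_player)

print(suspicion_rate(["far"]))           # 0.1
print(suspicion_rate(["far", "mid"]))    # 0.5 - mid wins
print(suspicion_rate(["close", "mid"]))  # inf - instant max suspicion
```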



Note that currently this suspicion system doesn’t take into account how well lit the player is. I’ll try adding a modifier based on that during the next pass as right now I just feel it’d muddy the waters while we test out the basics.



I also added some code to cover colliding with the AI. Spoilers - they don’t like it very much.



Behavior Tree



All these new features mean there’s a lot more going on in the Behavior Tree now. I think it’s time to start splitting these into separate behaviors . . .







There are a few other issues that have started to appear now things are a bit more complicated. I have quite a few areas where I quite harshly abort sequences when things change (for example, the player leaving the AI’s sight), and these now cause visible hitching as the AI flip-flops between two branches. I’ll need to take a step back, rethink some of these longer sequences, and find some more graceful break points.



So, that’s all for now! What’s next?





Hooking up all the animations

Moving the view cone around (looking up and down, moving up and down slopes etc)

Adding some placeholder sounds

Lots of tweaking of the ‘magic numbers’ until it feels good



Thanks for reading! I’d be happy to hear any feedback if people have any advice or thoughts to offer here.

crwilso - Re: The Siege and the Sandfox « Reply #7 on: May 23, 2016, 01:54:16 PM »



Hey, thanks for taking a look. 2D in Unreal is a challenge, but there are plenty of rewards once you make it through the tricky parts. Feel free to ask anything about what we've done here - us 2D Unreal guys have got to stick together.

Michael Klier - Re: The Siege and the Sandfox « Reply #10 on: May 24, 2016, 11:05:42 PM »

Quote from: crwilso on May 23, 2016, 01:54:16 PM: "2D in Unreal is a challenge, but there's plenty of rewards once you make it through the tricky parts."



Hey there, any chance of a little write-up on those "tricky" parts? I'm currently in the process of figuring out which engine to use for my own 2D projects and have had a brief look at Paper2D & UE4; even though it's a beast, it's still on the table because of the blueprint system.



Great art & visuals btw! Looking forward to seeing this develop further. I'm a big Mark of the Ninja fan.


Boxy - Where Does He Get Those Wonderful Toys « Reply #11 on: May 24, 2016, 11:52:49 PM »



Quote from: Tuba on May 24, 2016, 11:27:09 AM

May I ask you guys how are you handling the lighting with the 2D sprites? Are you using anything special? Cause I've been having a lot of trouble with that

Hey all, one of the pixel pusher art types here on Sandfox. Thanks for all your kind words; it's nice to know we are hopefully doing something people might enjoy!

What trouble have you experienced, Tuba? We aren't doing anything particularly out of the box when it comes to the lighting in Sandfox right now. All the lighting, materials and rendering is the standard Unreal Engine toolset; no special modifications or changes.



We use, for the most part, lit masked sprite materials. There are no specular, roughness or normal maps, just standard diffuse maps and opacity masks. There are exceptions that may use unlit materials, and a few rare translucency uses, but the game is probably 95% lit masked right now. The sprites are mostly drawn in a fairly standard-practice way; we opted for a universal "light source" that's above everything and slightly to the right. This allows us to pick out details on the characters and the environment, even though they are in the shadows quite a lot.



Characters and the foreground environment are generally drawn much brighter and more vibrant, with "top down" lighting, to help them stand out; the background is drawn much darker and desaturated with a generic, almost ambient-occlusion-esque "front lit" look that perhaps helps the world look dynamically lit when really it isn't.



One thing we did decide to do that required a little know-how/experimentation was to make a pixel colour shift on characters when they step in and out of lights, via a lookup table. Again, this is made completely in standard Unreal: I knocked up a material that can read in a Paper 2D sprite, check pixel colour values, and switch them based on a Light or Dark LUT, re-colouring the characters in realtime dependent on whether they are in a light source or not. This is a subtle addition to the game but really helps sell where the characters are in terms of light levels, which is kind of important in a stealth game. We tweeted an early test video of it if you wanted to see; the video is not the best quality thanks to Twitter limits, so maybe we'll rustle up a GIF sometime if it isn't huge.
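The LUT recolour reduces to a per-pixel table lookup. In the game it's a standard UE4 material reading light/dark lookup textures; this Python sketch only illustrates the idea, with invented palettes:

```python
# Hypothetical light/dark palettes: source colour -> recoloured colour.
LIGHT_LUT = {(200, 150, 100): (255, 200, 140)}  # brighten this tone
DARK_LUT  = {(200, 150, 100): (90, 70, 60)}     # same pixel, in shadow

def recolour(pixels, in_light):
    """Swap each pixel through the light or dark lookup table."""
    lut = LIGHT_LUT if in_light else DARK_LUT
    # Pixels not in the table pass through unchanged.
    return [lut.get(p, p) for p in pixels]

sprite_row = [(200, 150, 100), (0, 0, 0)]
print(recolour(sprite_row, in_light=True))   # brightened first pixel
print(recolour(sprite_row, in_light=False))  # darkened first pixel
```

Because the mapping keys on exact source colours, the artist stays in full control of the shadowed palette rather than relying on a generic darkening multiply.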



I hope that helps. I maybe went on a bit there, but if anyone has any questions or criticism I'm more than happy to reply as best as I can.

Zorg - Re: The Siege and the Sandfox « Reply #13 on: May 25, 2016, 12:32:29 AM »

*scnr*



Your project looks very nice, but the big eyes are scaring me a little, maybe because they lack highlights. I'm only seeing one big pupil (on my non-calibrated screen).

crwilso - Re: The Siege and the Sandfox « Reply #14 on: May 25, 2016, 02:30:06 AM »

Quote from: Michael Klier on May 24, 2016, 11:05:42 PM:



Hey there, any chance of a little write-up on those "tricky" parts ? I'm currently in the process of figuring out which engine to use for my own 2D projects and had a brief look at Paper2D & UE4, even though it's a beast, it's still on the table because of the blueprint system.



Great art & visuals btw! Looking forward to see this develop further. I'm a big Mark Of the Ninja fan.


That's what I intend to focus these log posts on, and hopefully some short videos in the future. So far the AI stuff I've outlined has had its challenges; rolling our own sense configuration is one example. The out-of-the-box senses are great for 3D games, but a bit more problematic for us. If there's anything specific you're concerned about, let me know.

Boxy - Everything will distort, everything will be unquantifiable « Reply #15 on: June 20, 2016, 05:01:36 AM »





Refractive Particles In Editor | Refractive Particles In Game Refractive Particles In Editor | Refractive Particles In Game

We carried on this way for a while, knowing of the problem and assuming that the orthographic camera might get an update at some point to support more of the effects that Unreal is capable of rendering but after a while it became clear a different approach might be needed. As we don't actually NEED physically correct refraction, we just want to have the cool effect of heat haze or water visually appearing to distort the world behind it, refraction is actually overkill for our needs anyway. We got to thinking all we really need is to be able to distort the final scene texture and apply it to the scene with bounds of our choosing, either by particle effect or sprites. I started looking through post process effects and how they are applied to the screen (the



By using the ScreenPosition node and distorting the scene texture via noise maps that are panned and rotated on top of each other at mixed scales, we get a distorted haze effect across the screen. We now only use this material on sprites or particles where we need the effect to happen, and we quickly and easily get the intended effect on an orthographic camera in game, bringing our fires and watery effects to life and making the world feel more dynamic.

The material I made up looks something like this, if anyone wants to achieve something similar in their 2D games; the premise can be carried across to any material type - particle, sprite or post process.

With this hurdle overcome, we can now have the fire haze we wanted, along with distortion when walking behind waterfalls or pools of water within the game. These effects are all ongoing works in progress (and some are quite subtle in these small potato GIFs), but I figured the post might be useful to budding 2D Unreal developers. As usual, any criticism or feedback is appreciated.
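For readers without the material screenshot to hand, the panning-noise idea can be sketched in plain code. This is a rough illustration of the principle, not the actual UE4 material - the noise function, layer scales and pan speeds are all made-up stand-ins: two noise layers pan at different scales and speeds, and their combined value nudges the screen UV used to sample the scene texture.

```python
import math

def distortion_offset(uv, time, strength=0.01):
    """Sketch of panned-noise screen distortion (hypothetical values).

    Two pseudo-noise layers pan in different directions at different
    scales; their combined value offsets the screen-space UV used to
    sample the scene texture, giving a heat-haze wobble.
    """
    def noise(u, v):
        # Cheap periodic stand-in for a tiling noise texture sample.
        return math.sin(u * math.tau) * math.cos(v * math.tau)

    u, v = uv
    layer_a = noise(u * 4.0 + time * 0.10, v * 4.0)                 # coarse layer, pans right
    layer_b = noise(u * 9.0 - time * 0.07, v * 9.0 + time * 0.05)   # fine layer, pans diagonally
    wobble = (layer_a + layer_b) * 0.5      # combined value in [-1, 1]
    return (u + wobble * strength, v + wobble * strength)
```

In the material version, the equivalent of `uv` comes from the ScreenPosition node, the noise layers are panned texture samples, and the offset feeds the SceneTexture UV input; keeping `strength` small is what makes the effect read as haze rather than outright warping.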




OllyOllyBennett







Re: The Siege and the Sandfox « Reply #17 on: August 31, 2016, 10:38:28 AM »

We've been focusing back on full-time development following Develop and Radius, as well as working on a few TOP SECRET things that will be shared when the time is right.
Our designer Chris is currently split between two tasks: AI improvements for our stealth gameplay, and fleshing out the outer edges of the world map.



World Map





We've been working on making rooftop sections at the very top of the map. This is an area we're sure will get lots of iteration, because we’d like a player who’s really mastered the moveset to be able to get across here in one fluid motion.



To start with, Chris is just blocking out the space, seeing what feels right, and how to sensibly manage the height change between the two neighbouring sections. Currently it feels pretty good to climb the towers internally, but not so good from the outside. We'll be focusing on breaking the outline up a little and trying to add some more interest as you travel between them.



While doing this, we've also started adding to the wishlist of future player controller improvements. Things like being able to interrupt some animations, and some ‘feel good’ things like caching of jump inputs, and allowing a little tolerance when making jumps off ledges.
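Those two 'feel good' items - caching jump inputs and tolerating late jumps off ledges - are commonly known as jump buffering and coyote time. As a rough sketch (not the project's actual controller; the window lengths are invented):

```python
class JumpAssist:
    """Sketch of jump-input caching ('buffering') and ledge tolerance
    ('coyote time'); the window lengths here are made-up values."""

    BUFFER_WINDOW = 0.15   # seconds a pressed jump stays cached
    COYOTE_WINDOW = 0.10   # seconds after leaving a ledge a jump still counts

    def __init__(self):
        self.time_since_jump_pressed = float("inf")
        self.time_since_grounded = float("inf")

    def tick(self, dt, grounded, jump_pressed):
        """Call once per frame; returns True when a jump should fire."""
        self.time_since_jump_pressed = (
            0.0 if jump_pressed else self.time_since_jump_pressed + dt)
        self.time_since_grounded = (
            0.0 if grounded else self.time_since_grounded + dt)

        if (self.time_since_jump_pressed <= self.BUFFER_WINDOW
                and self.time_since_grounded <= self.COYOTE_WINDOW):
            # Consume the cached input so one press causes one jump.
            self.time_since_jump_pressed = float("inf")
            return True
        return False
```

The nice property of tracking both as "time since" counters is that the two tolerances compose for free: a jump pressed slightly before landing fires on landing, and a jump pressed slightly after stepping off a ledge still fires.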



We've also started to prototype some of the movement features that’ll be unlocked during the course of the game, but we’ll save that for another update.



AI



Meanwhile, over in AI town we've been trying to work out the best way to approach doors.



The current implementation uses nav link proxies, and actually works surprisingly well - kudos to one of our coders, Rex, for digging into these because Chris was having difficulty understanding how to make them do anything useful!



However, we're trying to take a pretty tough stance: as much decision-making as possible for the AI should happen as behaviours within the behaviour tree. Currently, the use of nav link proxies breaks this rule pretty fundamentally. From the behaviour tree's perspective, the AI just uses ‘move to’ and either reaches its final destination, using doors along the way, or doesn’t make it.



We're pushing for a solution that lets us set up lots of fallbacks in the tree to allow for things like unlocking doors, breaking doors, and anything interesting we might come up with in the future. We've been heavily inspired by the examples given in this Gamasutra article.



To that end, we’ve mocked up our own version using placeholder nodes in a behaviour tree:
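The core of that fallback structure is a behaviour-tree selector: try children in priority order and succeed on the first one that succeeds. As a rough, hypothetical illustration of the door chain (not the actual tree - the leaf behaviours and blackboard keys are all made up):

```python
def walk_through(bb):
    # Succeeds only if the door is already open.
    return bb["door_state"] == "open"

def unlock(bb):
    # Fallback 1: unlock the door if the agent holds a key.
    if bb.get("has_key"):
        bb["door_state"] = "open"
        return True
    return False

def smash(bb):
    # Fallback 2: break the door down if it's breakable.
    if bb.get("door_breakable"):
        bb["door_state"] = "broken"
        return True
    return False

def selector(*children):
    """Behaviour-tree selector: run children in order, succeed on the
    first child that succeeds, fail only if every child fails."""
    def run(blackboard):
        return any(child(blackboard) for child in children)
    return run

# Priority chain: walk through an open door, else unlock it, else smash it.
get_through_door = selector(walk_through, unlock, smash)
```

New fallbacks (pick the lock, find another route, give up and raise the alarm) slot in by appending children to the selector, which is exactly why keeping the decision inside the tree - rather than hidden in a nav link proxy - pays off.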





Well, that’s all for now - we’ll provide another update when some of these things are implemented. Fingers crossed it works as well as it does in our heads!

The Siege and the Sandfox: Website