Friday 22 June 2018

Musical Direction - part 1

Music is fun... at least, it is if you're doing it right. Since I was very, very young, I have always enjoyed the thrill of giving a live performance, but it wasn't until my teenage years that I discovered the fun of composing, arranging and conducting music.

For any readers who may not know, the conductor (that person in a suit who stands in front of an orchestra gesticulating wildly at them) has the job of directing the players' performance. With the right hand, a conductor indicates the speed at which they want the music to be played - controlling the ebb and flow of the tempo for maximum musical effect. With the left hand, they usually signal to specific players (or sections) to adjust the volume at which they are playing, or perhaps suggest a way in which to deliver the notes (most conductors have, in my experience, made a great many strange and vaguely threatening gestures toward their ensembles by the end of their first year).

Anyone who has read my previous posts will have seen videos of my gesture-recognition system controlling simple actions. In this project, I have attempted to take that work further, and make some initial forays into the world of VR music by building an interface for adjusting the placement and volume of the various instruments in one of my early compositions (I'm calling it an early composition to try and excuse how awful and repetitive it is).

As you will see, the project consists of 10 orbs - one per instrument - each displaying the waveform of its audio output. By physically arranging these audio-orbs, the user can adjust the way the performance sounds - and familiarize themselves with each instrument's part.
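For the technically curious, the core distance-to-volume mapping is tiny. Here is a minimal sketch of how it might be wired up in Unity (the linear falloff and the maxAudibleDistance field are my illustrative choices, not necessarily what the demo uses):

```csharp
using UnityEngine;

// Attach to each audio-orb; fades the instrument out linearly with distance.
public class AudioOrb : MonoBehaviour
{
    public Transform listener;             // usually the VR camera
    public float maxAudibleDistance = 10f; // assumed falloff range

    private AudioSource source;

    void Awake() { source = GetComponent<AudioSource>(); }

    void Update()
    {
        float dist = Vector3.Distance(transform.position, listener.position);
        source.volume = Mathf.Clamp01(1f - dist / maxAudibleDistance);
    }
}
```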

This is the foundational set-up for the system, and I will be adding more and more features to it. For example, rather than having the distance of each orb control that instrument's volume, it is possible to instead re-scale the orbs with gestures (as per my previous videos), and have the size affect the volume. This would allow us to associate distance with reverb saturation (so that, as one might expect in real life, the farther away the orb is, the more its output reverberates and is delayed).
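A hedged sketch of that variant - scale driving volume, distance driving a 0-1 reverb mix (how the mix gets applied depends on your audio routing, e.g. an AudioMixer parameter):

```csharp
using UnityEngine;

// Variant: orb scale drives volume, orb distance drives a 0-1 reverb mix.
public class AudioOrbVariant : MonoBehaviour
{
    public Transform listener;
    public float referenceScale = 1f;     // scale at which volume = 1 (assumed)
    public float maxReverbDistance = 10f; // distance at which reverb mix = 1

    private AudioSource source;

    void Awake() { source = GetComponent<AudioSource>(); }

    void Update()
    {
        source.volume = Mathf.Clamp01(transform.localScale.x / referenceScale);

        float dist = Vector3.Distance(transform.position, listener.position);
        float reverbMix = Mathf.Clamp01(dist / maxReverbDistance);
        // Feed reverbMix into your reverb effect of choice, e.g. an exposed
        // AudioMixer parameter: mixer.SetFloat("ReverbWet", reverbMix);
    }
}
```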

That is another fairly simple use-case for this project. There are a lot of ways it could be expanded and developed further - hopefully, more on that in a future post.

Anyway, without further ado, here is the video of one of my pieces being performed in the app.
(I have tried to make it as interesting as possible, but feel free to skip through).





As always, thanks for reading! Stay tuned for more posts and projects!

Wednesday 4 April 2018

Multi-Hand Gesture Recognition Algorithm

Gesture recognition is a vital tool for virtual reality.

Sure, we can get by with simple touch-to-use functionality, but having fast and accurate hand gesture recognition opens a huge number of potential directions for user interface development.

By contextualizing the gestures of a single hand against either its own previous gestures, or the concurrent gestures of another hand, we open up even more options. Most of the interfaces I have developed personally rely upon contextual gestures to expose the full extent of functionality.

But how do we actually recognize and distinguish hand gestures, and how do we do it in a timely fashion? Well, aside from training an AI to do it for us (more on that in a future post), there is another way to create fairly robust and fast gesture recognition, which works across a variety of devices - and it's quite easy!

If we measure the distance between each finger tip and the palm of the hand, as well as the relative angle of the palm's normal to the camera's viewport (or the user's head, if tracked), we can classify a large number of distinct gestures.
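In code, those features are just a handful of distances and one angle. A minimal sketch, assuming your tracking layer can provide fingertip positions plus the palm's position and normal (the struct and field names here are mine):

```csharp
using UnityEngine;

// Raw per-frame hand data, as most tracking SDKs can provide it.
public struct HandData
{
    public Vector3[] fingerTips;  // thumb..pinky, 5 entries
    public Vector3 palmPosition;
    public Vector3 palmNormal;    // unit vector pointing away from the palm
}

public static class HandFeatures
{
    // Distance from each fingertip to the palm.
    public static float[] TipToPalmDistances(HandData hand)
    {
        var dists = new float[hand.fingerTips.Length];
        for (int i = 0; i < dists.Length; i++)
            dists[i] = Vector3.Distance(hand.fingerTips[i], hand.palmPosition);
        return dists;
    }

    // Angle between the palm normal and the direction to the head/camera.
    public static float PalmToViewAngle(HandData hand, Transform head)
    {
        Vector3 toHead = (head.position - hand.palmPosition).normalized;
        return Vector3.Angle(hand.palmNormal, toHead);
    }
}
```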


(The red dots are the nodes, the green arrow is the palm's normal - a vector pointing directly away from the palm.)

If you point your index finger, you will notice that the distance between the tip of the extended finger and your palm is much greater than the distance between any other fingertip and your palm. This is true regardless of which finger is extended, though the precise ratio can vary - normally, the palm-distance of the extended digit will be more than 1.5 times that of the other digits.
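That ratio test translates directly into code. A sketch building on the HandData struct above (the 1.5x threshold is the rule of thumb from the text):

```csharp
public static class PointDetection
{
    // Returns the index of the single extended finger, or -1 if none qualifies.
    public static int ExtendedFinger(HandData hand, float ratio = 1.5f)
    {
        float[] d = HandFeatures.TipToPalmDistances(hand);
        for (int i = 0; i < d.Length; i++)
        {
            bool dominant = true;
            for (int j = 0; j < d.Length; j++)
            {
                // The extended digit must beat every other digit by the ratio.
                if (j != i && d[i] < d[j] * ratio) { dominant = false; break; }
            }
            if (dominant) return i;
        }
        return -1;
    }
}
```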

To detect a handshake, for example, we must also check the distance between each fingertip and the head, to make sure that all digits are farther from the head than the palm is (i.e. the hand is being held out straight). Furthermore, we need to check the angle of the palm against the angle of the head, and make sure that the palm is pointing inwards - the palm of a right hand, for example, points toward the head's left when held out for a handshake.
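A sketch of that handshake test for a right hand (the 45-degree tolerance is an assumption to tune per device):

```csharp
using UnityEngine;

public static class HandshakeDetection
{
    // True if the right hand looks like it is held out for a handshake.
    public static bool IsHandshake(HandData hand, Transform head)
    {
        // All fingertips must be farther from the head than the palm is,
        // i.e. the hand is held out straight.
        float palmDist = Vector3.Distance(hand.palmPosition, head.position);
        foreach (Vector3 tip in hand.fingerTips)
            if (Vector3.Distance(tip, head.position) <= palmDist) return false;

        // The palm must face toward the head's left (for a right hand).
        float angle = Vector3.Angle(hand.palmNormal, -head.right);
        return angle < 45f; // assumed tolerance
    }
}
```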

We can combine the two methods as well, to detect the specific orientation of a pointed finger - classifying whether the user is pointing up or down, or which direction their palm faces as they point (useful for detecting some of the less polite, but still distinctive, gestures).

Once you begin to identify gestures in terms of these sorts of relationships, classifying them (and creating functions to check for them) becomes quite manageable - and it doesn't rely on anything device-specific. As long as your hand-tracking method can tell you the positions of the fingertips, and the position and rotation of the palm, you can apply this method of gesture recognition.

The crudeness of this method also does it something of a favour - these checks can be performed very quickly and the system can be expanded or reduced as necessary, making it fairly optimizable.

As mentioned above, we can take it even further by introducing context from other gestures. In the below example, different parameters of the flock of green blobs are affected by different gesture configurations - an open, upward-facing left palm combined with a pointed right index finger opens the settings for the follow speed of the flock. By contrast, two inward-facing open palms manipulate flock cohesion, and two extended index fingers allow control of each blob's avoidance radius.
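Dispatching on the pair of per-hand classifications is then trivial. A sketch (the gesture names and the mode enum are illustrative, not the demo's actual code):

```csharp
public enum Gesture { None, OpenPalmUp, OpenPalmInward, IndexPoint }

public enum FlockMode { None, FollowSpeed, Cohesion, AvoidanceRadius }

public static class FlockInterface
{
    // Combine the two hands' classified gestures into an interface mode.
    public static FlockMode ClassifyContext(Gesture left, Gesture right)
    {
        if (left == Gesture.OpenPalmUp && right == Gesture.IndexPoint)
            return FlockMode.FollowSpeed;
        if (left == Gesture.OpenPalmInward && right == Gesture.OpenPalmInward)
            return FlockMode.Cohesion;
        if (left == Gesture.IndexPoint && right == Gesture.IndexPoint)
            return FlockMode.AvoidanceRadius;
        return FlockMode.None;
    }
}
```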



And of course we can introduce gesture order as a requirement for accessing certain functionality. For example, we can require the user to make a finger gun, then lower the hammer to fire it:
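In code, that ordering is just a tiny state machine. A sketch with hypothetical pose names:

```csharp
public enum GunPose { None, FingerGun, HammerDown }

// Fires only when a FingerGun pose is followed by a HammerDown pose.
public class FingerGunTrigger
{
    private bool cocked;

    // Call once per frame with the currently classified pose;
    // returns true on the frame the "gun" fires.
    public bool Update(GunPose current)
    {
        if (current == GunPose.FingerGun) { cocked = true; return false; }
        if (cocked && current == GunPose.HammerDown) { cocked = false; return true; }
        if (current == GunPose.None) cocked = false; // reset when the pose is lost
        return false;
    }
}
```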



Anyway, that's an overview of how the system works. I will be releasing it on the Asset Store once I've had a chance to do some more testing.

Wednesday 8 November 2017

Side Project: Random Shaders 2

Continuing my work with shaders, I have been experimenting with a few other long-term ideas.
Some are refinements of mechanisms employed in my other shaders; others are entirely new.
So, without further rambling, here is Random Shaders part 2!

1) Ultra-Violet Light (Blacklight).
My latest shader is a refinement of the dynamic colour-swap shader mentioned in my last shader post.
Under real blacklight, fluorescent surfaces absorb invisible UV light and re-emit it within the visible spectrum - giving objects a sort of saturated luminescence.


Whilst it is possible to fake this effect fairly easily with purple light and purposefully-recoloured textures (as in this example from the Cave of Mystery on Route 66, from Blizzard's Overwatch), I hadn't yet seen a shader which could approximate this effect dynamically.

So I made one (or I tried to). The results are fairly satisfying (though not 100% robust, yet).
Instead of responding to an invisible frequency of light, this shader lets the user specify the colour to treat as ultraviolet (dark purple by default). The user can then specify which colours should be replaced (and with what) when hit by light of that colour. There is also a version which manipulates alpha, so that the surface is only visible under that light.
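The per-fragment rule is easiest to show outside shader syntax. A C# sketch of the replacement logic (the tolerance-based colour match is my assumption about how "light of the correct colour" is decided):

```csharp
using UnityEngine;

public static class BlacklightLogic
{
    // Returns the surface colour, swapping 'from' for 'to' only when the
    // incoming light is close enough to the designated "UV" colour.
    public static Color Shade(Color surface, Color incomingLight,
                              Color uvColour, Color from, Color to,
                              float tolerance = 0.1f)
    {
        bool litByUv = ColourDistance(incomingLight, uvColour) < tolerance;
        bool matches = ColourDistance(surface, from) < tolerance;
        return (litByUv && matches) ? to : surface;
    }

    // Simple Euclidean distance in RGB space.
    static float ColourDistance(Color a, Color b)
    {
        Vector3 d = new Vector3(a.r - b.r, a.g - b.g, a.b - b.b);
        return d.magnitude;
    }
}
```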

This shader will be for sale on the Unity Asset Store soon(ish).
In the meantime, here are some videos of the two shader variants in action.

2) Anti-Light (Not to be confused with Blacklight!)

Anti-light is part of a long-term project of mine to create cool darkness effects that require minimal animation expertise for maximum visual impact. This shader is the very first, fumbling step toward that goal.

The shader allows for up to 8 anti-lights (or dark-lights), which subtract light based on distance calculations - they are essentially negative point-lights. The user can modify the umbra (the solid darkness) and the penumbra (the gradient of darkness around the edges) separately - allowing for some visual variation. The points are, of course, dynamic - so they can be moved / animated for particularly creepy effects.

This is only the beginning for the anti-light shaders - I am hoping to get into some really weird shadow manipulation eventually (there will be more posts on it).
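For the curious, the subtraction itself is just a distance falloff with two radii. A sketch of the per-point darkness term (the names and the linear falloff are my assumptions):

```csharp
using UnityEngine;

public static class AntiLight
{
    // How much light one anti-light removes at a given point (0..1).
    // Inside umbraRadius the darkness is total; it fades to zero at
    // penumbraRadius.
    public static float Darkness(Vector3 point, Vector3 lightPos,
                                 float umbraRadius, float penumbraRadius)
    {
        float dist = Vector3.Distance(point, lightPos);
        if (dist <= umbraRadius) return 1f;
        if (dist >= penumbraRadius) return 0f;
        return 1f - Mathf.InverseLerp(umbraRadius, penumbraRadius, dist);
    }

    // Apply the combined darkness of all anti-lights by scaling the lit colour.
    public static Color Apply(Color lit, float totalDarkness)
    {
        float k = Mathf.Clamp01(1f - totalDarkness);
        return new Color(lit.r * k, lit.g * k, lit.b * k, lit.a);
    }
}
```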




And that's it - only the two for now, both of which will be for sale very soon (link will be added).
I will be posting more on the other shaders I develop (and my current long-term game project).
Thanks for reading!

Tuesday 11 July 2017

Project 2: "Only Fools and Corpses"



An ongoing, WIP personal project (sadly, another project I haven't had time to work on lately), OF&C is a strategy/survival/simulation game in which the player creates a zombie virus and infects a small town.


The goal is to design a plague that will destroy the town's inhabitants. The player gets 25 points to spend on the virus's various attributes (there are tooltips to explain each option).

Then, the player gets to travel the town and choose an infection point. Once selected, they launch the capsule and set off the infection (and a panic).

Whilst infected, citizens can infect each other, and may head to the hospital to get cured (the likelihood of these events is based on choices the player makes during design). Once the victims die and become zombies, they begin attacking the town (its citizens and buildings). Certain buildings provide bonuses, and certain citizens will defend themselves (there are also police, who will attack the zombies - and the player, if they see them throw the capsule).

The big twist of this game, however, is that the zombies can, and will, attack the player given the chance. The player needs to be clever and quick to successfully destroy the town, and progress to the next level.

As with all my projects featured here, I was the only programmer for this game. In this case, I was responsible for level design and creative direction as well. The game has a pixelated, quasi-fifties style, with black & white dialogue cards, washed-out colours, and grainy filters.

One of the biggest features is the AI, which is fairly complex. The police coordinate with each other, but lose that advantage if the police station is destroyed; they also have panic levels, which determine how brutal their behaviour will be. The citizens have a range of panic reactions to select from (affected by various factors, such as base 'bravery', damage taken, number of enemies, etc.). The zombies are fairly simple - they can flock together, but also react to damage taken, and can be taunted by certain things as well.
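As a flavour of how that reaction selection might work, here is a hedged sketch - the factors are the ones mentioned above, but the weights and thresholds are purely illustrative, not the game's actual tuning:

```csharp
public enum PanicReaction { Flee, Hide, Fight, Freeze }

public static class CitizenAI
{
    // Pick a reaction from a panic score built out of the factors above:
    // base bravery, damage taken, and the number of visible enemies.
    public static PanicReaction ChooseReaction(float bravery, float damageTaken,
                                               int visibleEnemies)
    {
        float panic = damageTaken * 0.5f + visibleEnemies * 0.3f - bravery;
        if (panic < 0f)   return PanicReaction.Fight;  // brave enough to stand
        if (panic < 0.5f) return PanicReaction.Freeze;
        if (panic < 1.0f) return PanicReaction.Hide;
        return PanicReaction.Flee;
    }
}
```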

There are some other neat little features as well - such as procedural citizen generation (skin tone, hair style, hair colour, face and name are all procedural). Citizens are also grouped by surname - they share homes and react more severely when family members are hurt.
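A minimal sketch of the generation idea - random attributes per citizen, with the surname (and home) shared per household (all the pools and names here are placeholders, not the game's data):

```csharp
using UnityEngine;

public class Citizen
{
    public string firstName, surname;
    public Color skinTone, hairColour;
    public int hairStyle, faceIndex;
}

public static class CitizenGenerator
{
    static readonly string[] FirstNames = { "Alice", "Bob", "Carol", "Dan" };

    // Generate one household: every member shares the surname.
    public static Citizen[] Household(string surname, int members)
    {
        var family = new Citizen[members];
        for (int i = 0; i < members; i++)
        {
            family[i] = new Citizen
            {
                firstName  = FirstNames[Random.Range(0, FirstNames.Length)],
                surname    = surname,
                // Blend between two placeholder skin tones.
                skinTone   = Color.Lerp(new Color(0.9f, 0.75f, 0.6f),
                                        new Color(0.4f, 0.26f, 0.13f),
                                        Random.value),
                hairColour = Random.ColorHSV(0f, 0.1f, 0.2f, 0.8f, 0.1f, 0.9f),
                hairStyle  = Random.Range(0, 6),
                faceIndex  = Random.Range(0, 8)
            };
        }
        return family;
    }
}
```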



This project took about two months of work to get to its current state (which, as mentioned, is not quite complete).

Here is a short gameplay demo (as you can see, it is very much a WIP, but it has some interesting potential, and programming the AI was a great exercise).


I will be uploading a copy of the game so far, so you can sample it for yourselves.

Monday 10 July 2017

Side Project: Random Shaders




Shaders can be very rewarding, I think mostly because of their inherent visuality; the results are instantly visible (or not, if something has gone wrong). Shader work can also be highly experimental - the level of documentation varies wildly, requiring a lot of hands-on learning and exploration (which can be fun too).

As part of my experimentation with shader programming, I created some utility / random shaders. Some were made to solve particular problems, others were simply experiments / exercises to develop my knowledge or test theories I had on possible shaders. Here are some of my favourites (in no particular order).

1) Dynamic Text Recolouring.

I found I had a problem with text readability; if the font's colour didn't contrast enough with the background, the text was difficult or even impossible to read.
To fix this, I made a simple shader, applied to the text, which uses a grab pass to perform a per-pixel contrast check and then swaps the text colour based on the background colour. This means that text is (usually) readable, regardless of the background colour.
There are some cases (contrast-noisy backgrounds, such as bright but normal-mapped textures) which cause issues with this method - the text can become noisy as well. To fix this, I am hoping to create a compute-shader version in the future, which will swap the whole text if more than 50% of its pixels require the contrast boost.
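The contrast check itself boils down to comparing luminance. A sketch of the per-pixel decision (the real shader does this against the grabbed background texel; the 0.5 threshold and the black/white swap are simplifications of mine):

```csharp
using UnityEngine;

public static class TextContrast
{
    // Perceptual luminance of a colour (Rec. 709 weights).
    static float Luminance(Color c) =>
        0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;

    // Pick black or white text for this background pixel.
    public static Color TextColourFor(Color background)
    {
        return Luminance(background) > 0.5f ? Color.black : Color.white;
    }
}
```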



2) Color 'Rotation'

After theorising about the result, I decided to try to create a shader which would rotate the colour vector (RGB) of a light source by a specified amount. The result is a texture which receives a colour wheel from light sources. The frequency of the colours can be adjusted for some very strange effects!
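Hue rotation is the easiest way to picture what the shader does. A C#-side sketch of the idea (the actual shader operates on the light's per-pixel contribution; 'frequency' here just scales how fast the hue cycles):

```csharp
using UnityEngine;

public static class ColourRotation
{
    // Rotate a colour's hue by 'amount' (in turns), scaled by 'frequency'.
    public static Color Rotate(Color input, float amount, float frequency = 1f)
    {
        Color.RGBToHSV(input, out float h, out float s, out float v);
        h = Mathf.Repeat(h + amount * frequency, 1f); // wrap around the wheel
        return Color.HSVToRGB(h, s, v);
    }
}
```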


Here is the same light, but with the frequency massively increased:



3) "Grey Light"

After playing around with light sources inside material shaders, I came up with an interesting project idea - a material which fades to greyscale based on its illumination by light sources. The end goal is a shader which allows the user to specify a light source to receive this effect from, and which otherwise behaves normally.
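The core blend is a single lerp. A sketch (how 'illumination' is computed depends on your lighting model; the lerp toward grey is the interesting part):

```csharp
using UnityEngine;

public static class GreyLight
{
    // Fade a surface colour toward greyscale as 'illumination' (0..1)
    // from the designated light source increases.
    public static Color Apply(Color surface, float illumination)
    {
        float grey = surface.grayscale; // Unity's built-in luminance helper
        Color desaturated = new Color(grey, grey, grey, surface.a);
        return Color.Lerp(surface, desaturated, Mathf.Clamp01(illumination));
    }
}
```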



4) Additive Alpha Removal

When creating a range-overlay effect for a strategy game, I encountered a lot of problems with additive alpha blending on transparent objects, which meant that the range-area markers would overlap horribly. To fix this, I created a simple shader which uses the stencil buffer to limit the alpha of all pixels to the same amount - the results have been very useful.
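Conceptually, the stencil trick acts like a per-pixel "already drawn" mask. A CPU-side analogue of the idea, just to illustrate why overlaps stop stacking (the real version lives in the shader's stencil state):

```csharp
// Each pixel of the overlay is written at most once per frame, so two
// overlapping range markers produce the same alpha as one.
public static class RangeOverlay
{
    public static void Blend(float[] frameAlpha, bool[] stencilMask,
                             int pixel, float markerAlpha)
    {
        if (stencilMask[pixel]) return;  // stencil already marked: skip
        stencilMask[pixel] = true;       // mark, so later markers are rejected
        frameAlpha[pixel] = markerAlpha; // every covered pixel gets equal alpha
    }
}
```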


5) Targeted Dynamic Colour Replacement

Another exercise, this shader has actually proven useful to several artist friends of mine (or so they tell me - maybe they're just being nice). This material shader dynamically replaces all instances of a specified colour with another specified colour. The accuracy of the colour comparison can be adjusted, for artifact removal / experimentation.
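The comparison is the same colour-distance idea as in the blacklight shader above. A sketch (the exact distance metric is my assumption):

```csharp
using UnityEngine;

public static class ColourReplace
{
    // Swap any pixel close enough to 'replaceThis' for 'replacement'.
    // 'accuracy' is the maximum RGB distance that still counts as a match.
    public static Color Apply(Color pixel, Color replaceThis, Color replacement,
                              float accuracy)
    {
        Vector3 d = new Vector3(pixel.r - replaceThis.r,
                                pixel.g - replaceThis.g,
                                pixel.b - replaceThis.b);
        return d.magnitude <= accuracy ? replacement : pixel;
    }
}
```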


The inspector is fairly self-explanatory but, just in case: ReplaceThis is the colour to be swapped out, and Replacement is the colour to swap it to. The above settings produce this output:



Well, that's all the shaders I wanted to show off for now. Hopefully you found some of them interesting!

Project 1: Simple City Generator


Procedural, Grid-Based City Generator


As an exercise, I decided to create a procedural city generator. It is quite rudimentary, but has some interesting elements that are worth mentioning.

The Unity-built generator works by populating an area with pre-defined (and measured) prefabs (such as specific road segments).

The first stage creates a central junction, which is then extended with randomly selected pieces - to create a random distribution of junction points along a primary 'X'.


After this has been created, the algorithm finds the intersections of all junctions and fills in the road layout accordingly - creating junctions and straight pieces as necessary.

As you can see, at this point the generator has created a fairly effective (if a bit robotic) road layout. Naturally this creates rectangular spaces between the roads, so by using a box-packing algorithm, the spaces can be neatly populated with building prefabs.

However, in order to make the cityscape a little more interesting, the algorithm randomly spawns a number of "district nodes" (points in space with a specific enumerator type attached). Each rectangle measures its distance to these nodes and selects from a different pool of building prefabs, based on the type of the nearest node. In short, the city generator creates districts with a consistent feel - allowing for the creation of residential, industrial, political or other themed sections of the city.
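The district lookup is a straightforward nearest-node query. A sketch of how each building plot might choose its prefab pool (the type names are illustrative):

```csharp
using UnityEngine;
using System.Collections.Generic;

public enum DistrictType { Residential, Industrial, Political, Commercial }

public class DistrictNode
{
    public Vector3 position;
    public DistrictType type;
}

public static class Districts
{
    // Each rectangle picks the pool belonging to its nearest district node.
    public static DistrictType NearestDistrict(Vector3 plotCentre,
                                               List<DistrictNode> nodes)
    {
        DistrictType best = DistrictType.Residential;
        float bestDist = float.MaxValue;
        foreach (var node in nodes)
        {
            float d = Vector3.Distance(plotCentre, node.position);
            if (d < bestDist) { bestDist = d; best = node.type; }
        }
        return best;
    }
}
```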


Here you can see the result - clustered buildings of different types forming different districts within the city. (Please excuse the white-box assets - I am not a 3D modeller and have to make do with what I can find / convince friends to make in their free time).



As a final touch, I added invisible, weighted pathing geometry to the road segments / crossings, and added Oculus Rift support so that the player can explore the city in first person and meet its denizens (who are designed to spawn and navigate only around the player).


Project 0: Flatliners


Flatliners was the title of my final year project at Salford University. It is a first-person speed-running game, based on Counter-Strike: Source (CSS) surf maps and other community-spawned games.

Here is one of the promo gameplay trailers the team made. (The music may be quite loud!)

I was the sole programmer for this project, which was made with Unity in about 4 months.

The game itself included a tutorial and a single level to showcase the core mechanics.



Some of the other features which were interesting to develop were:

1) the Ghost/Replay system, which not only recorded a playthrough but also allowed the player to compete against their own records. It also featured some rudimentary data visualisation for saved playthroughs.

2) movement was quite complex in this game, as the exact details and contingencies weren't finalised in the design until late in the development cycle. It had a strange mix of inertia, fixed (and jump-based) acceleration, and very high speeds.

3) the projectile system was pretty challenging; the game was dealing with very high speeds most of the time, so there were a lot of issues with spawning bullets and detecting collisions. I solved these by performing ray-based distance checks and only spawning the projectile when the target was sufficiently far away - otherwise, it assumed a hit and spawned the projectile's explosion immediately (see the sketch after this list). This helped improve the control fidelity of the Rocket Jumps.

4) to make the game scalable, I added a modular level system - Unity asset bundles were added to a specific directory, along with a picture and a manifest, which provided in-game information about the new levels.
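For the curious, the projectile check from point 3 might look something like this sketch (the distance cutoff and the names are mine; the idea is simply "raycast first, and only spawn a real projectile if the target is far enough away"):

```csharp
using UnityEngine;

public class ProjectileLauncher : MonoBehaviour
{
    public float minSpawnDistance = 5f;  // assumed cutoff
    public GameObject projectilePrefab, explosionPrefab;

    public void Fire(Vector3 origin, Vector3 direction)
    {
        // At very high speeds a spawned projectile can tunnel through
        // nearby geometry, so check the flight path with a ray first.
        if (Physics.Raycast(origin, direction, out RaycastHit hit)
            && hit.distance < minSpawnDistance)
        {
            // Target is too close: treat it as an instant hit.
            Instantiate(explosionPrefab, hit.point, Quaternion.identity);
            return;
        }
        Instantiate(projectilePrefab, origin, Quaternion.LookRotation(direction));
    }
}
```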

There were many other rewarding aspects, as well, but these were the points that stuck with me.

Despite gathering a lot of attention (the game was showcased at Insomnia54 and other events throughout the UK, and was a semi-finalist for Microsoft's Imagine Cup award), Flatliners was never released.

More information (and a demo) can be found at the IndieDB page.