## Archive for May, 2013

### Ideas as to things I should probably look into doing next

May 27, 2013

Hi folks,

In addition to some of the things mentioned in the last post, I’ve had a bit of a further play, and have identified more clearly what I would like to try to implement next.

Namely, I have been having a look into Mecanim – in particular, I found this tutorial quite useful.  I’ve since accumulated a bit of a library of basic animations, and a few models, old and new.  In addition, I’ve been looking into patrol pathing for the creatures in question, sandboxing this work within a new scene.

Essentially what I would like to achieve is to have a group of friendlies – guards – on a set patrol path, and a group of enemies – skeletons – also on a set patrol path (inspired by this resource). When the two come within sight radius of each other, I want them to close for melee combat.  Upon attack, I would like the other creature to sense this and also join combat.  Creatures will have to face in the correct direction and be within a certain range for combat to be permissible.  In addition, I would like each creature to have a set number of hit points.  When this reaches zero, I would like an appropriate animation to play, followed by a moderate delay prior to destroying the prefab.  Then I would like a replacement enemy / guard to spawn and start walking along the original patrol path until it sees an enemy / guard, then charge to initiate combat, and so on.
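As a rough sanity check of this logic, the detect / charge / fight / die cycle can be sketched as a small state machine. This is only an illustrative Python sketch of the design – the names, sight radius, and damage values are placeholder assumptions, and the real thing would live in Unity scripts:

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHARGE = auto()
    COMBAT = auto()
    DEAD = auto()

SIGHT_RADIUS = 10.0   # placeholder values
MELEE_RANGE = 1.5

@dataclass
class Creature:
    name: str
    faction: str                  # "guard" or "skeleton"
    x: float                      # 1D position, standing in for a Vector3
    hp: int = 10
    state: State = State.PATROL
    target: "Creature | None" = None

def distance(a, b):
    return abs(a.x - b.x)         # stand-in for Vector3.Distance

def update(creature, others):
    """One AI tick: detect enemies, close to melee range, fight, die."""
    if creature.hp <= 0:
        creature.state = State.DEAD   # play death anim, delay, destroy, respawn
        return
    if creature.state == State.PATROL:
        for other in others:
            if (other.faction != creature.faction
                    and other.state != State.DEAD
                    and distance(creature, other) <= SIGHT_RADIUS):
                creature.state, creature.target = State.CHARGE, other
                break
    elif creature.state == State.CHARGE:
        if distance(creature, creature.target) <= MELEE_RANGE:
            creature.state = State.COMBAT
        else:  # move one step toward the target
            creature.x += 1.0 if creature.target.x > creature.x else -1.0
    elif creature.state == State.COMBAT:
        creature.target.hp -= 2                       # swing at the target...
        if creature.target.state == State.PATROL:     # ...who senses the attack
            creature.target.state = State.CHARGE      # and joins combat
            creature.target.target = creature
```

The facing-direction check and animation calls are omitted here; the point is just that each creature only needs its current state, a target, and a couple of range constants.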

In terms of animations, I would like a walking animation to play while the creatures are walking, maybe a pause and idle animation at each waypoint, or certain specific waypoints, and if an enemy comes into sight radius / arc I would like the model to charge until within melee distance – at which point it is to stop and play an attack animation.  For a more convincing experience, I might also like to build in a “morale” stat, which determines at what health % the enemy / guard is likely to break and run in a random direction, at reduced speed.

I would also like the player to be able to join combat on one side or the other, most likely in favour of the guards.

For a bonus feature, I would like to have a button on the player HUD that allows the following effect to occur – random spawning of large boulders falling from the sky, at, say, 5 or 6 (or maybe 10?) fixed, predetermined spawn points.  I.e. at every predetermined time interval (allowing for delta time so that the implementation is hardware agnostic), a random number (say 1 to 3) of spawn points would be randomly selected.  Prior to impact I would like there to be an obvious red glowing marker at the location where the boulder will fall.  Then I would like the boulder to spawn and fall at some predetermined speed, and automatically crush any avatars beneath, forcing respawn.  Subsequent to impact I would like the boulder to decay after some period of time, maybe with a lifetime of 1 minute.  (In the intervening time I would like it to be treated as a wall for pathing purposes, according to the particular AI script I will be using.)
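The spawn-timing logic can be sketched with an accumulated delta time, so the behaviour is frame-rate independent. All the numbers here are the guesses from the paragraph above, and the class name is made up:

```python
import random

SPAWN_POINTS = [(10, 0), (20, 5), (5, 15), (30, 10), (25, 25), (0, 30)]  # 6 fixed points
SPAWN_INTERVAL = 8.0     # seconds between volleys (placeholder)
WARNING_TIME = 2.0       # red marker shown this long before impact
BOULDER_LIFETIME = 60.0  # boulder decays after about one minute

class BoulderSpawner:
    def __init__(self):
        self.timer = 0.0
        self.events = []   # [age, position] for each live marker / boulder

    def update(self, delta_time):
        """Advance by delta_time each frame, as with Unity's Time.deltaTime."""
        self.timer += delta_time
        if self.timer >= SPAWN_INTERVAL:
            self.timer -= SPAWN_INTERVAL
            count = random.randint(1, 3)                    # 1 to 3 boulders per volley
            for pos in random.sample(SPAWN_POINTS, count):  # distinct spawn points
                self.events.append([0.0, pos])              # age 0: show red marker
        for ev in self.events:
            ev[0] += delta_time
        # boulders past marker time + lifetime decay and stop blocking pathing
        self.events = [ev for ev in self.events
                       if ev[0] < WARNING_TIME + BOULDER_LIFETIME]
```

An event younger than `WARNING_TIME` would be rendered as the glowing marker; older than that, as the falling (then landed) boulder treated as a pathing obstacle.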

I would then be interested in implementing a dive and roll animation into the player controls, and possibly introducing a “reactions” / “reflex” stat for guards / skeletons, which makes it more likely that they will detect an imminent impact and also dive out of the way, even if they are in combat.  I would also like to be able to switch this effect off from the player HUD at will.

Later, time and resources permitting, ideally I would like to introduce missile combat animations (bow and arrow, and/or crossbow, sling, etc), and magic combat animations, to allow for guard / skeleton variants that can engage in combat at range.

So that’s my animation plans.  For the time being, as mentioned above, I intend to sandbox this activity, since converting the work to something that is synchronised in a SmartFoxServer might make things harder to build initially; I can do the legwork for that later.

The second thing I am interested in doing, and which will probably take much longer to figure out, is how to integrate a MySQL database with a SmartFoxServer application.  The ultimate goal is an architecture wherein I can store player-specific login names and passwords (so, for instance, if this were a licensed game, players would need to transact, or at least register, to obtain a name and password), together with a table containing the set of each player’s characters and their names, and a table that, for each character name (uniquely specified across the server, for chat considerations), contains health, stats, and inventory.  Inventory would be a string containing a list of words, e.g. potion, wand, leather armour, broadsword, that would presumably be read by the SFS extension jar file, allowing the server to transmit to the client the data relating to the items they have.  That sort of idea, anyhow – so that the data relating to persistence of character state, for the collection of characters each player has, is stored on the server.
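As a sketch of that schema, here is a version using Python’s built-in sqlite3 as a stand-in for MySQL – the table layout, not the engine, is the point, and all table and column names are my own guesses at what would be needed:

```python
import sqlite3

# In-memory SQLite stand-in for the MySQL schema described above;
# the layout carries over to MySQL with minor type changes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE players (
    id       INTEGER PRIMARY KEY,
    login    TEXT NOT NULL UNIQUE,
    password TEXT NOT NULL           -- store a hash, never plaintext
);
CREATE TABLE characters (
    id        INTEGER PRIMARY KEY,
    player_id INTEGER NOT NULL REFERENCES players(id),
    name      TEXT NOT NULL UNIQUE,  -- unique server-wide, for chat
    health    INTEGER NOT NULL DEFAULT 100,
    strength  INTEGER NOT NULL DEFAULT 10,
    inventory TEXT NOT NULL DEFAULT ''  -- word list, e.g. 'potion,wand,broadsword'
);
""")
conn.execute("INSERT INTO players (login, password) VALUES (?, ?)",
             ("chris", "hashed-secret"))
conn.execute(
    "INSERT INTO characters (player_id, name, inventory) VALUES (?, ?, ?)",
    (1, "Aldric", "potion,leather armour,broadsword"),
)
# What the SFS extension would read back and push to the client:
items = conn.execute(
    "SELECT inventory FROM characters WHERE name = ?", ("Aldric",)
).fetchone()[0].split(",")
```

The extension jar would do the equivalent reads via JDBC (SmartFoxServer extensions are Java), but the queries themselves would look much the same.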

So that’s the goal.  I do not expect implementation to be easy, particularly since documentation on the subject seems relatively scarce.  However, this post looks like a good place to start.  In particular, it indicates how one might set up a custom SmartFoxServer via the RightScale platform if one wanted to deploy the server in the cloud, say on Amazon EC2, via the 12th point of the recipe (one can directly modify files while booting up / setting up a virtual machine on RightScale).

### Game Development #4 – Shop, basic dialogue, and game state persistence

May 26, 2013

I’ve made some further progress with my Unity game.  Here is the latest demonstration video:

Previous videos in the series, in reverse order of appearance:

Bridge Battle

Animation and multiplayer demonstration

Transmission of player location via smartfoxserver

In this video, I demonstrate the integration of some functionality from a Unity Asset Store package into my game.  The features added are:

• an action HUD, with “examine”, “use”, “talk”, and “focus” as allowable actions
• the ability to shop at a store
• basic dialogue (managed as a state machine), together with the option to switch from English to Afrikaans
• game state persistence
• a basic inventory and the ability to equip items

The things that I would like to look into next would be, in no particular order:

• The option to choose an avatar (or save game) on entry.  Perhaps allow multiple avatars per player, but with persistence of their state (as in a standard MMO).  Allow for different characters to be played (i.e., different avatar appearances), but synchronise the chosen avatar across SmartFoxServer so that all players see the same thing; this could be done, for instance, by indexing a set of prefabs constituting the allowable avatar forms.  Use Mecanim to standardise animations across all avatars, so that there is no ambiguity when making a call to a player animation to do something in particular (eg walk, strafe, jump, run, fight (melee), fight (archery), fight (magic)).  [This is in fact a great strength of the current incarnation of Unity – the ability to essentially map animations to different character frames.]
• NPC guards on a patrol path.  I thought I might be able to use Rain{Indie} for this, although that might be slightly over the top.
• A better inventory system.  Allowing one to have object representations for items and mouseover text, if possible.
• A character representation for item placement.  Allowing one to drag and drop items to particular locations on the character’s frame, or, alternatively, to be able to select an inventory slot and equip / unequip whatever is available in character inventory.
• Monsters vs NPCs, with spawn points for both – for an ongoing pitched battle.
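The prefab-indexing idea in the first bullet point might look something like this – the server relays only a small integer index into a prefab list agreed on by every client, never the model itself (the prefab names below are hypothetical):

```python
# Hypothetical avatar sync: all clients ship with the same ordered prefab list,
# so an avatar choice can be transmitted as a single small integer.
AVATAR_PREFABS = ["KnightPrefab", "RangerPrefab", "MagePrefab"]

def encode_choice(prefab_name):
    """Client -> server: send the index of the chosen avatar prefab."""
    return AVATAR_PREFABS.index(prefab_name)

def decode_choice(index):
    """Server -> other clients: every client instantiates the same prefab."""
    return AVATAR_PREFABS[index]
```

Provided every client build agrees on the list order, a one-byte index is enough for all players to see the same avatar.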

### Two excellent pieces of news

May 22, 2013

Hi folks,

I’m afraid this is not another content post, but the following items may be of interest.

The first is that Unity now allows developers who use their software to deploy not only to standalone builds and the Unity web player, but also to Android and Apple mobile platforms for free.  To do this, merely open Unity, open the “Unity” tab, click “Manage License”, and then select “Check for Updates”.  Your license will be updated automatically, and mobile deployment unlocked.  Via this post.

Another interesting development – you may recall that earlier I wrote about dotcloud’s amazing open-source release of Docker.  Now the same chaps have made it even easier to independently deploy “sandbox” style applications on it, via the release of a new project on their GitHub page, Sandbox.  The original blog post describing this release is located here.

I have not yet investigated this wonderful new tool, but if you, the reader, are interested in checking it out, the installation instructions for both Docker and Sandbox can be found at this location.

### Convolution reverb and audio editing

May 11, 2013

Audio is not really my key interest or passion, but I have heard from someone who knows that, when it comes to sound editing or studio recording, reverb is an extremely useful effect to add to a particular line of the track.  (By line, I mean a single audio input; multiple audio inputs would come from the different instruments involved.)  There are other important effects, too, that interest sound engineers – here is a list of the most commonly used – but reverb is the most important, since it provides the impression that the instruments are playing somewhere.

Reverb is useful for completely digital music, music recorded in a studio (where the room is designed so that there are no reflections), or some mix of the two.

It is also a fairly lucrative business to get into.  ValhallaRoom, one of the leading amateur plugins, has sold hundreds of thousands of copies; and other commercial software of this flavour can cost thousands of dollars per license.

So what is reverb, and how does it work?

Most generally, a room or space will have a characteristic wave equation associated to it: $(\Delta - \partial^{2}_{t})f = 0$, with particular boundary conditions determined by the geometry.  If we apply a forcing term to this equation $(\Delta - \partial^{2}_{t})f(\vec{x},t) = h(\vec{x},t)$, then we essentially simulate the addition of sound sources to the room.  This is the general situation we wish to solve for; given an input signal (h) what is the audio response (f) ?

HERE THERE BE GREEN’S FUNCTIONS

It turns out that this problem is soluble if one introduces the concept of a Green’s function, which in audio speak / signal processing lingo (for this particular PDE) is also known as the impulse response.

We solve the equation $Lf := (\Delta - \partial^{2}_{t})f(\vec{x},t) = \delta(\vec{x} - \vec{y},t - \tau)$ instead.  Various techniques can then be used; separation of variables is probably best from this point, if one observes that $\delta(\vec{x} - \vec{y},t - \tau) = \delta(\vec{x} - \vec{y})\delta(t - \tau)$ – the response of the system to a point source of strength one at location $(\vec{y}, \tau)$.  It is then, with a bit of work, possible to determine a solution $G(\vec{x} - \vec{y},t - \tau)$ to the above equation: the Green’s function.

Then, with a bit more work, if we observe that

$h(\vec{x},t) = \int_{\vec{y},\tau}\delta(\vec{x} - \vec{y})\delta(t - \tau)h(\vec{y}, \tau)d\tau d\vec{y}$

but $h = Lf$, and $\delta = LG$, which suggests that our solution is

$f(\vec{x},t) = \int_{\vec{y},\tau}G(\vec{x} - \vec{y}, t - \tau)h(\vec{y},\tau)d\tau d\vec{y}$

which, with a bit of careful reasoning, can actually be proved to be the case.  So our solution is the convolution of the impulse response, $G$, with our input signal, $h$.
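In discrete time – which is what an audio plugin actually works with – the convolution integral becomes a sum over samples. A minimal, unoptimised sketch (a real implementation would use FFT-based convolution for speed):

```python
def convolve(g, h):
    """Discrete analogue of f = G * h:  y[n] = sum_k g[k] * h[n - k]."""
    y = [0.0] * (len(g) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(g)):
            if 0 <= n - k < len(h):
                y[n] += g[k] * h[n - k]
    return y

# Toy impulse response: direct sound plus two progressively quieter echoes.
G = [1.0, 0.0, 0.5, 0.0, 0.25]
```

Feeding a unit impulse through `convolve` returns `G` itself – the defining property of the impulse response; any other input signal comes out with the echoes superimposed.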

PRACTICALITY AND IMPLEMENTATION

WLOG, we can simplify by absorbing the boundary and shape information into the metric of the Laplacian, and assume we are within a cubical domain.  With a further simplification – the more practical scenario – we can simply consider the ODE

$f''(t) + Af'(t - c) + Bf(t - d) = \delta(t - \tau)$ to compute a Green’s function for a ‘room’ with shape parameters A, B, c, and d.  Then, to return our signal to the listener, we merely compute the convolution of the solution to this with the original input.  For more of this flavour, there is actually an impressive series of lecture notes at this location: mi.eng.cam.ac.uk/~rwp/Maths/vid09, in particular l2notes.pdf.
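A crude way to get a feel for this ODE is to integrate it numerically with forward Euler, taking $\tau = 0$ and approximating the delta function by a single tall sample. The parameter values below are arbitrary stand-ins for the ‘room shape’ parameters A, B, c, d:

```python
# Forward-Euler sketch of  f''(t) + A f'(t - c) + B f(t - d) = delta(t),
# with f = f' = 0 for t <= 0.  All parameter values are illustrative only.
def impulse_response(A=0.2, B=0.5, c=0.05, d=0.1, dt=0.001, T=1.0):
    n = int(T / dt)
    ic, idt = round(c / dt), round(d / dt)   # delays measured in samples
    f = [0.0] * n
    fp = [0.0] * n                           # f'
    for i in range(n - 1):
        delayed_fp = fp[i - ic] if i >= ic else 0.0
        delayed_f = f[i - idt] if i >= idt else 0.0
        delta = (1.0 / dt) if i == 0 else 0.0  # unit impulse at t = 0
        fpp = delta - A * delayed_fp - B * delayed_f
        fp[i + 1] = fp[i] + dt * fpp
        f[i + 1] = f[i] + dt * fp[i]
    return f
```

The returned samples are the ‘room’s’ Green’s function; convolving them with an input signal, as above, applies the reverb.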

But can we do more than this?  Perhaps, if we allow for nonlinear terms.  Or higher order, ‘soliton’ terms – although I’m not sure that this consideration would be helpful.  But whatever the outcome, at the end of the day, we get a system that has tweakable parameters, like ValhallaRoom, above.

So that’s all well and good, but what about the general case?  What happens if one considers an environment with a very complex spatial geometry indeed?  Would it be possible to, say, create a game environment and experience sound sources from different locations in the scene?  This would be an excellent (if extremely difficult!) project.

Well, it turns out that this work has already been done.  Some researchers have managed to implement the full solution of the general wave equation in a general game environment, like Unity, by precomputing the impulse response (requiring anywhere between hours and many hundreds of hours of computer time) – ‘baking’ it in so that it can be convolved with sound sources as the player walks around the scene.  Note: ‘baking’ is also used for other things that would be computationally expensive to do on the fly in-game, such as lighting.  Apart from having a working solution, these folks also seem to have got around a number of the roadblocks that have short-circuited the practicality of such approaches in the past, such as compressing the data file needed to store the impulse response information for an outdoor scene from about 100 gigabytes down to a few megabytes (maybe using techniques such as fast Fourier transforms, as well as simplifying assumptions), and using various tricks to make shifting geometry and movement computable on the fly for audio effects.

In the demonstration video, some really cool effects are shown: diffraction of sound around objects, the characteristic ‘double-ring’ of a bell in a bell tower caused by sound reflections, muffling or muting of sound caused by obstacles in the way, and more!  It is very, very cool.  If such a technology were available for Unity as a plugin / asset / resource from the Unity Asset Store for, say, an affordable price (like ValhallaRoom), I’d snap it up in an instant.

The paper itself, describing a sketch as to the algorithm and method used in the video above, is located here.  The video itself is downloadable from this location.

PLANS / PROJECTS

Given this summary of the basics and of the state of the art, doubtless the question might be raised: where does this leave me?  Well, as mentioned before, I’d love to use the technology demonstrated above in a Unity game.  As a secondary consideration, I’d be interested in having a play with a simple reverb signal modifier, to see if I can hack one together as described above.

Regardless, I’ll be sure to write if I make any progress along such lines.

### Django on GAE – final thoughts for now

May 10, 2013

I came across the following project the other day: http://code.google.com/p/gae-django-cms/ .  It works right out of the box, and seems fully equipped with admin and all sorts of other useful things.  Additionally, it seems quite easy to integrate it with other (simple) django projects, like django-monetise.  Consequently this is probably where I will park my investigations of GAE for now.