Haptics, dark glasses, and a broad brimmed hat

December 7, 2015

There has been a fair amount of hype around this sort of area, so I’ll keep this short.  Essentially, my interest in writing this post is to indicate what I’m looking for in a portable working environment as a ‘laptop replacement’.

To cut to the chase, the key thing I’d be after is a VR interface hooked up to a pair of modified dark glasses, with haptics to provide the sensation of typing on a keyboard or moving a mouse.

One could then have an AR/VR style interface usable almost anywhere, running off the computer in one’s pocket, or a workstation at home or the office.  So quite practical, really.

2016 should see some movement in this general sort of direction, but my guess is that it’ll take 5 to 10 years for the area to truly come into its own, and maybe 10 to 15 to properly mature.

End-to-end testing in Kubernetes

December 7, 2015

Something that I continue to be interested in is the possibility of end-to-end testing using Kubernetes. The project has come a long way since its launch on GitHub a little over a year ago, and it is interesting to see where it might head next. The purpose of this post is to explore the focus of my own particular fascination with it – the possibility of end-to-end testing complex workflows built of multiple microservices.

Currently Kubernetes does have an end-to-end test suite, but its purpose seems largely to be testing the Kubernetes project itself. However, there are the seeds of something with broader utility in the innards of the beast. Some contenders downstream have started building ways to make writing end-to-end tests for applications easier, such as fabric8-arquillian for OpenShift applications. More recently, about twenty days ago, a pull request was merged that looks like it might go a step further towards providing a more user-friendly framework for writing such tests. And the documentation for the system is starting to look promising in this regard, although a section therein marked ‘examples’ is still classified as ‘todo’.

But why this obsession with end-to-end testing in this way? Why is it so important? Why not simply write browser tests for an application, as a tester would?

The problem is that browser tests do not capture much of what is going on behind the scenes – really only the surface layer, the user-facing component of an application. To get more immediate feedback about whether a particular part of a workflow is broken, one needs something that can plug into all the backend machinery. And, in the world of the modern stack, that essentially means something that can plug into containers and trigger functionality, or monitor things in detail, at will.

Kubernetes is of course not the only technology with promise here. The more fundamental building block of Docker is becoming quite versatile, particularly with the release of Docker 1.9, and the Docker ecosystem (Docker Compose, Swarm, and Machine) may yet play the role of a worthy competitor in the quest for the vaunted golden fleece of the end-to-end testing champion merino.  Indeed, anything that might provide a practical and compelling way to end-to-end test an application built out of multiple containers is of interest to me, and although I am currently sitting on the Kubernetes side of the fence, I could well be convinced to join the Docker platform crowd.

My current thinking on a system that provides great end-to-end testing capability is the following: it should use Docker (or rkt) containers as its most atomic components. One should be able to hook into these containers using a container-level or orchestration-service-level API and activate various parts of the microservice therein, built in such a way as to facilitate a proper end-to-end test. There is, however, a problem here: the security of the system needs to be watertight, so altering production code is understandably not particularly desirable.

Instead, it would be great to somehow be able to hook into each Docker container in the pathway of an application and read the logs along the way (this can be done today). It would be great, however, to be able to do more. One idea might be to write code in the ‘test’ folder of the application, next to the unit tests, that will only be run when a testing flag is set to true. Alternatively, maybe one could add API endpoints that are only opened in the staging environment and are switched off in production (although this may still introduce unacceptable risks). With these additional endpoints, one could read the state of particular variables as data flows through the application under test, and check that everything is functioning as it should.
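
As a rough sketch of the test-flag idea – the route table, the `/_test/state` path, and the `ENABLE_TEST_ENDPOINTS` environment variable are all my own inventions for illustration, not any existing convention – the gating might look something like this in Python:

```python
import os

# Hypothetical application state an end-to-end test might want to inspect.
APP_STATE = {"orders_processed": 0}

def build_routes(env=os.environ):
    """Build the route table, adding introspection endpoints only when the
    (hypothetical) ENABLE_TEST_ENDPOINTS flag is set, so that production
    containers never expose them."""
    routes = {
        "/orders": lambda: "orders endpoint",  # a normal production route
    }
    if env.get("ENABLE_TEST_ENDPOINTS") == "true":
        # Staging-only: lets a test read internal state mid-workflow.
        routes["/_test/state"] = lambda: dict(APP_STATE)
    return routes
```

The container image stays identical between staging and production; only the environment flag differs, which keeps the risk surface down to a single setting.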

The alternative to directly debugging code in this way is, of course, to simply log the value of a variable at a particular point, and then read the logs. Maybe this is, indeed, all that is required – together with writing tests for all of the standard endpoints of each container, and checking that they are working as they should. There might be an elegant way of writing helper libraries that log something to a Docker log file for testing purposes only, and then expose this information cleanly via some form of endpoint.

Anyway, end-to-end testing of multi-container applications is quite an interesting area, and I am keen to see how it develops and eventually matures. Together with monitoring tools like Prometheus, there is a large amount of potential for good work in this direction, and I am excited about where things might eventually lead.

Thoughts regarding WebVR

August 24, 2015

I recently came across the work that Mozilla has been doing on virtual reality for the web, or WebVR.  This is interesting because the promise of moving beyond dynamic webpages towards a fully immersive experience of cyberspace offers a tremendous opportunity for improving user experience, as well as extending the capabilities of the web to provide information services, not to mention improving user productivity.

In particular, the changes likely to unfold over the next few years – led by organisations like Facebook, and perhaps game developers – are quite reminiscent of William Gibson’s description of the web in Neuromancer.  It would be truly interesting if retail of the future were conducted not via the browse-and-click of Amazon’s or eBay’s current websites, but in immersive VR environments staffed either by real people or, more likely, by programs running on deep learning algorithms or other more sophisticated techniques, with more powerful back-end tooling, databases, and processing architecture running the minds of VR shop assistants.  Not to mention the ways people might use the technology not only for retail, but for community, whimsy, or artistic expression.

Alternatively, it is possible that this sort of change might complement the ways of viewing data presented by the HoloLens project, or other augmented reality initiatives.

The key problem with the work done to date, however, is that the ways of interacting with the worlds therein divorce one dangerously from the real world.  At least in front of a computer one can hold a conversation, or turn and look somebody in the eye.  Perhaps it might therefore be useful to envisage a future where, instead of wearing a VR headset, one has a pair of glasses with a lightweight VR computer, bionic eyes, or even a direct or indirect neural interface that would recede based on preferences set by the individual, or via the judgement of a proto-intelligent, ‘Cortana’-like robot secretary.

As a minor admonition, I think it is important not to be suffocated by developments like this, and not to be overly hasty about embracing nascent technologies.  But assuming this is going to be the future, I think it is important to understand it, and the possibilities it presents, while thinking about how to maintain a healthy connection to one’s humanity.  Coupling the coming changes with augmented reality and with older technologies, and trying to keep devices small and out of the way of real, authentic conversation and community – these are things I would certainly bear in mind were I to adopt anything like this in the future, or experiment with toy examples built on the associated tools or SDKs.

A few developments that I think are interesting and worth following

July 2, 2015

No particular updates on my game project at the moment, unfortunately.  I have been thinking a little about a couple of other things of a slightly more abstract nature – namely model selection, and a generalisation of the concept of the radical of a number – but these, too, are works in progress.  Of course, Unity 5.1 is now out with the first phase of Unity’s new multiplayer networking API, but I think I’d like to take the Bolt and Zeus client-server architecture for a spin first and see where that leads (though interestingly enough, it now appears that Bolt has been acquired by Photon, one of the bigger players in multiplayer networking, who have, amongst other things, targeted Unity3D games as part of their strategy).

So instead of talking about what I’m up to currently (in terms of my hobby projects) I thought I might instead share a few findings with you, the reader, that I think are useful and important.

1. Desalination using nanoporous single-layer graphene

One of these is a recent article (March 2015) published in Nature Nanotechnology, here, on desalination (news digest here).  A number of things about it are significant:

  • The use of nothing except for graphene membranes, ie, carbon, so potentially a fairly cheap thing to produce
  • The potential for an almost 100% salt rejection rate
  • The high potential throughput (up to 10^6 g m^-2 s^-1), albeit induced via an applied pressure differential

So quite exciting.  It looks like, with an appropriate implementation, this sort of technology could lead to relatively cheap, clean, fresh water – certainly cheaper than current techniques, which are based either on brute force methods (such as heating the water and collecting the vapour) or on electrodialysis reversal (which has been in commercial systems since the 1960s).  And water is fairly important to have to hand.

2. Single stage to orbit spaceplane ‘Skylon’ on track, target date 2019-2022

Reaction Engines Limited seem to be making good progress on their SSTO spaceplane concept.  This relies heavily on the Sabre engine, which, through 3D printed nanotube components, has the following capabilities:

  • Can cool air from 1000 degrees Celsius to -150 degrees Celsius in 0.01 seconds.
  • Using proprietary methods and techniques, the critical components of the engine are impervious to frosting over at subzero temperatures.

In the last few years, the company has had the following nods of approval from various governmental bodies and organisations:

  • A 60 million pound funding injection from the British government in 2013
  • Feasibility study conducted by the ESA, leading to their seal of approval in 2014
  • Feasibility re-affirmed by the United States Air Force Research Laboratory in 2015

Furthermore, it now appears that the project is ramping up, with assistance and expertise being provided to REL from the United States, and various people with decades of industry experience being hired to the program.  So, very promising.  Test flights of the full-blown SSTO craft could come as early as 2019, with supply runs to the International Space Station by 2022.

The great thing about this project is mainly the cost per kg of delivering a payload to orbit.  Building a spacecraft using these techniques could lead to stupendous gains in efficiency, decreasing costs from the current £15000/kg down to £650/kg.  In other words, this development could open the solar system up for potential commercial use (albeit, hopefully, regulated by an appropriate global agency), and would certainly make it possible, at significantly reduced cost, to construct interplanetary spacecraft (for manned missions, say, to Mars), or logistical waypoints in orbit, supplied by shuttle runs, for the support of asteroid mining operations.  Naturally, this in turn (initial support of targeted asteroid mining operations, say within the window 2030-2040) would address another problem – the increasing scarcity of rare earth metals (although recycling could be a partial solution there).

3. Agriculture in arid areas

Something that I continue to follow is the development of the still-nascent area of farming in arid regions, using little more than seawater and sunlight to produce food for consumption.  The reason for this interest is that the world has no shortage of seawater, nor of arid areas relatively close to the sea, so there is considerable opportunity for innovation and growth here.

There are a few projects in particular that I am interested in here:

Both Seawater Greenhouse and Sundrop Farms use a similar sort of system: they pump saltwater, either from the water table (if close to the sea) or from the sea itself, into greenhouses.  Evaporation driven by solar energy then causes the water to cool the greenhouse and irrigate the plants inside.  This is a gross oversimplification, of course, and there have been decades of work polishing this general outline, but that is the idea.  There are certain other risks one needs to deal with in such an operation as well, such as maintenance costs, not to mention how one might handle a 5 metre sea level rise, or occasional storms.  Regardless, it appears that the technology has now become mature enough to start paying dividends – in a recent press release, Sundrop Farms announced their partnership with Coles supermarkets.

The ISEAS project is slightly different.  It uses saltwater ponds to grow shrimp and fish for food, mangroves to purify the saltwater, and plants grown with the purified water both to provide nutrients for the ponds (closing the loop) and to provide oil (in their seeds) for biofuel production.

So it looks like there is a fair bit of promise in this general direction.

Some thoughts on developments in enabling technologies over the next fifty years

May 7, 2015

Hi folks,

I thought I might share a few thoughts with you today on something that I’ve been thinking about for a while.  It is a fairly standard problem in computational mathematics, and arises naturally when dealing with solutions to partial differential equations with geometric invariants involving tensors of relatively high order (4+, eg, elasticity tensors in materials science): namely, solving these equations over a 2 to 4 dimensional domain using finite element methods, ie, numerically.

It is simple enough to solve Laplace’s equation or the heat equation over a square domain, but the problem arises when one increases the number of dimensions – or introduces a non-trivial (read: non-flat) metric tensor, such as when exploring numerical solutions to the equations of general relativity.  The former problem is easily solved on a desktop computer; the latter needs a relatively powerful supercomputer.

Why? The key problem is that, in the classical approach to solving a system of equations numerically, the time to converge to a solution (if a solution exists – but that’s another matter entirely, involving the growth and control of singularities) increases exponentially with the number of dimensions.  For instance, for a square domain 100 finite elements wide, one needs to perform operations on 100 x 100 units maybe 50 times until convergence.  For a cubic domain, it is 100 x 100 x 100 units, another 50 times.  For a four-dimensional domain, it is 100^4 units, another 50 times.  So the issue is clear – it is devilishly hard to brute force problems of this nature.
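
The scaling above can be spelled out in a few lines of Python, using the figures from the text (100 elements per side, roughly 50 sweeps to converge):

```python
def brute_force_cost(dimensions, elements_per_side=100, sweeps=50):
    """Rough operation count for naively sweeping a d-dimensional grid
    of finite elements until convergence."""
    return (elements_per_side ** dimensions) * sweeps

for d in (2, 3, 4):
    print(d, brute_force_cost(d))
# Each extra dimension multiplies the work by 100: the 4-dimensional
# case costs 100x the 3-dimensional one, which costs 100x the square.
```

At 5 billion unit operations for the four-dimensional case, and far more once each ‘unit operation’ involves a high-order tensor, the supercomputer requirement is unsurprising.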

For this reason, contemporary approaches tend to use sparse matrix methods (for ‘almost flat’ tensors) or impose a high degree of symmetry on the problem domain under exploration.  But this doesn’t really solve the underlying problem.  Surely one must be able to find a better way?

Well, maybe with quantum computers one might.  A recent paper (2013), as well as a slightly older document (from 2010), suggest that one can solve linear systems of equations exponentially faster on such a machine.  If true, this is encouraging, as it would render computational problems like this somewhat more tractable.  The developments at Microsoft Station Q on topological quantum computers, and the progress on more conventional quantum computers at other labs – such as IBM, who recently made a breakthrough in error correction (extending possible systems of qubits to the 13 to 17 qubit range) – suggest that the era of quantum computing is not too far away; we may be less than 20 years from desktop devices.

In a moderately more speculative vein, I am intrigued by the more futuristic possibility of going beyond quantum computing, into the domain of what one might call hyper-computing.  This would deal with systems of ‘hyper-qubits’ that are ‘hyper-entangled’.  That’s probably a bit vague, but what I have in mind is atomic components of the system that have multiple quantum numbers, each of which is entangled with those of the other atoms of the system, as well as entangled internally.  So essentially one would have two ‘strata’, or levels, of entanglement.  The key idea is that it might be possible to scale computing power not linearly as with bits, or exponentially as with qubits, but as 2 tetrated by the number of hyper-qubits hyper-entangled.  That would be a stupendous achievement – and, yes, futuristic.  At the very least, it would make problems such as the one described above much, much easier to solve, if not outright trivial.

For a slightly better idea of how this might work, the general idea would be to entangle a system (such as a pair of photons) in N degrees of freedom, as opposed to merely one for a standard qubit, and then consider entangling this system with M copies of the same.  So there would be two strata to the machine, ie, it would be an (N, M) hyper-entangled system.  Then, if one could scale N and M sufficiently – potentially by increasing the complexity of the system’s constituent ‘atomic systems’ – I suspect one would essentially have a system whose power grew as some number tetrated by another number, the latter growing as a monotonically increasing function of N and M.
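
Purely to illustrate the gap between the three scaling regimes mooted above – this is my arithmetic on an entirely speculative premise, not a claim about any physical system – here is a small Python comparison; tetration grows so violently that even tiny inputs make the point:

```python
def tetrate(base, height):
    """base tetrated `height` times: tetrate(2, 3) == 2**(2**2) == 16."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

# 'Capacity' reached by n units under each scaling regime:
n = 4
linear = n                    # classical bits: n
exponential = 2 ** n          # qubits: 2^n amplitudes
tetrational = tetrate(2, n)   # hypothetical hyper-qubits: 2 tetrated n times
print(linear, exponential, tetrational)  # → 4 16 65536
```

At n = 5 the tetrational figure is already 2^65536, a number with nearly twenty thousand decimal digits, which is why such a machine would trivialise the grid problems described earlier.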

Unity Networking looks like it is getting closer!

April 15, 2015

Exciting!  It looks like the first iteration of Unity Networking might be implemented in Unity 5.1.  I’m quite enthused by this, as it is the key development I’ve been waiting on before investing heavily in building the next version of my multiplayer networking project.  Well, not necessarily – recently I acquired a license for the Bolt engine, which looks like a reasonable alternative contender for this sort of functionality in Unity.  So I may actually proceed with that for the time being, and potentially also for the foreseeable future.

In other news, I was working quite hard a month or two back to deploy a custom build of BrowserQuest on Google App Engine.  I’ll spare you, the reader, the details, but basically I found the main obstacle to be documented in this Stack Overflow issue, which led its author to raise this ticket.  So it doesn’t look like this sort of thing is possible at the moment – at least on Google App Engine.  On Amazon Elastic Compute Cloud, though, it should certainly be quite doable.  Maybe one might be able to write a Lambda function for the nodejs server requests.  Now that would be an interesting learning experience.

Dungeons and Dragons – video and alpha brainstorming

February 28, 2015

Hi folks,

I just thought that I would provide a brief update on my current thinking on my dungeons and dragons project.  As mentioned in a previous post, a key barrier to progress here was having access to a good in-game level editor.  At first I thought OpenEd was the best thing there was to work with, and found it quite limited when I played around with it.  However, I recently discovered that it is possible to obtain runtime level editors from the Unity Asset Store relatively affordably.  So I followed through and purchased one, downloaded the data, and ran it locally in Unity 4.6.2.  You can see the results of my investigations below:

As you can see, it is possible to raise and lower terrain, and add objects to the scene, such as buildings, trees, and jeeps, etc.  So basically everything that I was after, and more.  Furthermore, the package provides complete access to the underlying source, so it is eminently feasible to plumb additional functionality into the program, which I think is quite exciting.

Hence, it now becomes possible to start working towards a first version of my dungeons and dragons dungeon master style multiplayer game.  In particular, I think there are a number of things that I’d now like to do:

  • Plumb in the multiplayer functionality from my previous project.
  • Introduce a simple and straightforward database model for players and dungeon masters.
  • Allow players to spawn in a world in an appropriate way (without falling forever).
  • Allow the dungeon master to move players around.
  • Allow the dungeon master to switch their camera to a creature under their control and move it around.

There are other things I’d like to do, but will probably defer for a later release:

  • Allow the dungeon master to switch their view to a player – default: passive (optionally: override player controls, and notify the player that their controls are overridden).
  • Saving and loading levels.  The runtime level editor that I acquired does have save and load functionality (it encodes levels as text files), but it doesn’t currently work quite the way I’d like.  Ideally, levels should be written as blobs to a database, with a secondary key identifying the user who generated the level, so that each user only has access to their own level list.
  • Give the dungeon master access to master database controls, eg, a switch to reset player positions (if some of them have started to fall forever, for instance).  I’d probably like to give players a reset switch, too (but limited to themselves only, of course).

And then, in a release after that:

  • Enable persistence of player position in each of the worlds in which they have played.  So, for instance, if player Rodriguez has played Bob’s level ‘OrcTown’ and Jill’s level ‘CottageIndustry’, and either respective DM loads said level and Rodriguez logs back in, Rodriguez should appear with the position and rotation coordinates he last held while playing that level.
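
Putting the level-storage and position-persistence ideas together, a minimal sketch of the data model might look like the following (SQLite here purely for illustration; table and column names are my own placeholders, not a settled schema):

```python
import sqlite3

def init_db(conn):
    """Levels stored as blobs keyed by their creator, plus one
    position/rotation record per (player, level) pair, so a returning
    player reappears where they last stood in that level."""
    conn.executescript("""
        CREATE TABLE level (
            id INTEGER PRIMARY KEY,
            owner TEXT NOT NULL,   -- only the owner sees it in their list
            name TEXT NOT NULL,
            data BLOB NOT NULL     -- the editor's text encoding of the level
        );
        CREATE TABLE player_position (
            player TEXT NOT NULL,
            level_id INTEGER NOT NULL REFERENCES level(id),
            x REAL, y REAL, z REAL,   -- position
            rot_y REAL,               -- facing
            PRIMARY KEY (player, level_id)
        );
    """)

def save_position(conn, player, level_id, x, y, z, rot_y):
    # Upsert: a player has exactly one saved position per level.
    conn.execute(
        "INSERT OR REPLACE INTO player_position VALUES (?,?,?,?,?,?)",
        (player, level_id, x, y, z, rot_y))
```

The composite primary key on (player, level_id) is what makes the Rodriguez scenario work: his OrcTown and CottageIndustry positions live in separate rows and never clobber each other.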

Plumbing in the multiplayer functionality should be relatively straightforward.  I will need to create prefabs for the players, of course, or at least migrate the ones from my earlier project across, and create any necessary prefabs for creatures that the dungeon master is to introduce into the world.  I will need to reintroduce a lobby, the ability to log in and have passwords authenticated, and the ability to create a game (becoming its DM) or join a game (that a player has created).  A messaging server will need to be created (using SmartFox, for instance, though that may change with Unity 5), and some sensible database structure built.

On creating a game, a player should have their ‘DM’ flag in the player table set to ‘true’.  If a player joins a game, their ‘DM’ flag should be set to false.

A game should not be joinable (in the lobby) if a terrain has not been deployed.  In this instance, the CanJoinGame flag in the player table for a DM who is preparing a level should be set to false, and the game should not appear in the list of available games in the lobby.  If a game is full (eg, 4/4 players), it should not appear in the list of games either, though that can be deferred until later.  One might also want to distinguish between ‘public’ and ‘private’ games: if a game is public, anyone can join; if private, only users who have been added to the dungeon master’s campaign list in PlayerCampaign should have visibility of it.  Ultimately, too, one would like to be able to filter games in progress so that one could find exactly the one one wished to play.
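
The lobby visibility rules above can be sketched as a simple filter – the field names here are illustrative stand-ins for the CanJoinGame flag and PlayerCampaign list, not a fixed schema:

```python
def visible_games(games, player, max_players=4):
    """Return the names of the lobby entries `player` should see.
    A game is hidden when its terrain isn't deployed (can_join False),
    when it is full, or when it is private and the player is not on
    the DM's campaign list."""
    out = []
    for g in games:
        if not g["can_join"]:          # terrain not yet deployed
            continue
        if len(g["players"]) >= max_players:   # game is full
            continue
        if g["private"] and player not in g["campaign_list"]:
            continue                   # private game, not invited
        out.append(g["name"])
    return out
```

Running the same list through the filter for two different players then yields two different lobbies, which is exactly the public/private behaviour described.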

Once a player has joined a game, they should spawn just above the terrain, say in the centre of the map.  Since terrain can be raised and lowered, this will be an interesting (but important) problem to solve.  Alternatively, perhaps the players could appear as tokens that the dungeon master (or they themselves) could place at will and leisure.  This might be the better way to go: players would have a trimmed-down version of the dungeon master’s interface, but would still be able to drag and drop themselves, once, into the game.  Then they could jump between the global map view and a first-person perspective from any of the creatures they control (the default being just one, their character).
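
As a language-agnostic sketch of the spawn problem (written here in Python, with a plain 2D grid standing in for sampling the Unity terrain's height), the idea is simply to read the terrain height at the centre of the map and place the player a little above it, so nobody falls forever regardless of how the terrain has been raised or lowered:

```python
def spawn_point(heightmap, clearance=0.5):
    """Return an (x, y, z) spawn position just above the terrain at the
    centre of the map. `heightmap` is a 2D grid of terrain heights; the
    clearance keeps the player's feet out of the ground."""
    rows, cols = len(heightmap), len(heightmap[0])
    cx, cz = rows // 2, cols // 2
    return (cx, heightmap[cx][cz] + clearance, cz)
```

In the actual game the height lookup would be a terrain sample or a downward raycast at the chosen point, but the shape of the solution is the same.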

This leads to a fairly important feature needed in the first version of the game: the ability to toggle between the global controller / camera and avatar-ing into a currently controlled token, using its controller / camera.  This may be difficult to do, but I think the payoff will be worth it.

Moving tokens and objects around should be relatively straightforward, as that is already built into the base template.

Kubernetes, Docker, and Mesos

November 30, 2014

Recently I noticed a couple of interesting new developments in the area of cloud deployment and distributed computing with virtual machines. A while ago (in about 2011-2012), I experimented with dotCloud when it was in its early startup stages, and found their tools a joy to use for setting up (and running) sites on dotCloud’s dedicated servers. Later on, when dotCloud moved into their production phase, I let that particular venture slide, but found their release of Docker to be a further amazing development.

Docker in and of itself is very powerful, but does not afford the full flexibility and utility of dotCloud’s engine and tooling, which allows / allowed one to assign groups of Docker machines, each with some form of ‘scaling’ factor, to the particular microservices driving a web application. So, for instance, one might have a MongoDB database as a backend, and a Java service implementing a Spring application as the frontend. Hence I was pleasantly surprised to discover, through reading the 5 year milestone blog entry for the release of the Go programming language, that Google has released a project called Kubernetes (from the Greek, meaning ‘helmsman of a ship’), also on GitHub, that does exactly that – where one has the idea of a ‘pod’ of Docker containers, with isolated volumes, replication controllers, and microservices. All very exciting!

Both Docker and Kubernetes, it turns out, are built in Go, a programming language designed and optimised specifically for this sort of cloud programming. Having discovered these new animals on GitHub – which seem to me likely to become systemically pivotal to the next steps in the web’s continued growth and evolution – I did some more reading on projects implemented in Go, and discovered Apache Mesos, which is essentially a way of abstracting computational resources away from individual virtual machines, and a step towards an operating system for distributed computing. Mesosphere, on GitHub, is a project consortium whose mission seems to be in line with said vision. They have a project, Kubernetes-Mesos, that I think would be well worth watching, as well as Marathon, which aims to be an ‘init’ analogue for multiple VMs on a Mesos cluster, and which may also become quite pivotal in the months and years ahead.

For a more mainstream take on these matters, there is this article at Wired that might be of interest for further reading on google’s kubernetes initiative. Here is a quote from the article in question:

“It’s part of an arms race. There are literally dozens of tools coming out,” says Solomon Hykes, the chief technology officer at Docker and the driving force behind the company’s software containers. “But Google joining that battle–with code that comes from their massive experience–helps show where this kind of thing will go.”

All quite fascinating.

But what does this mean for everyday folk, or interested hobbyists like me or you?  Well, certainly, I would not mind deploying a (very small) Kubernetes cluster using rented cloud resources (eg, on GCE, Amazon EC2, Rackspace, Azure, etc) as part of a hobby project at some point.  The ability to scale is also enormously compelling, particularly for a small player interested in managing their OPEX to accommodate growth in their customer base.  More fully-blown implementations, such as those incorporating Apache Mesos, are perhaps better suited to bigger players – corporate-scale programming and systems engineering, say.  Of course, the number of corporations that would actually make use of the sort of tooling described in this article in the next one or two years could probably be counted on the fingers of one hand (after all, how many Twitters and Googles does the world have today?  Not very many, I would wager).

However, I am reminded of the apocryphal quotation (circa 1943) that there would be a market for perhaps 5 computers in the world.  So maybe it will be the same with technological innovations such as this.  Maybe by 2030 we will see relatively widespread adoption of this sort of thing, with large companies, SMEs, or even individuals routinely using thousands of VMs, just as a user today would not think twice about using billions of transistors on a single machine.

An open source in-game Unity editor

November 15, 2014

I’ve been continuing to think about my aspirations for my Unity game – in particular, empowering players to build dungeons and function as a live ‘gamemaster’ / ‘dungeonmaster’, where rooms can be placed while players are in the game, and monsters / spawns can be dragged and dropped in. I’m also keen to build an application / game where dungeon states can be saved, or persisted. Regardless, as I mentioned back in this post, that seemed like an awful lot of work to build from scratch, and on further reflection I have since concluded that it would be far handier to adapt some resource that someone else has built, or use a tool another group has constructed, to accomplish my objective (rather as I did with SmartFoxServer for enabling multiplayer in my game, with its quite user-friendly API and accessible examples).

Hence I was quite excited today to discover that someone seems to have done just that, or at least built the germ of an idea that I and others can use. The project is OpenEd, and it claims to be essentially an open source in-game Unity editor. Very cool! The GitHub repository is here.

Certainly the task I have set for myself seems slightly less impossible now. I look forward to a time when I can dust off my old project and start trying to introduce this new functionality, hopefully around Christmas or the new year. I’ve already done a bit of work simplifying the project structure; what I need to do now, I think, is either clean up the assemblies (there are four or five weirdly named projects visible when looking at the scripts in MonoDevelop) or rebuild the scripting from scratch. I should probably also consider switching from MonoDevelop to Visual Studio, the flagship Microsoft IDE, which, in other good news, is now free for developers to use, and which has a Unity plugin to facilitate Unity game development.

Experiences at the Unite Australia conference in Melbourne, 2014

November 8, 2014

Hi folks,

I thought I might write briefly about my experiences at the Unite Australia conference that was held recently at the Melbourne Convention and Exhibition Centre – right next door to the building known affectionately by locals as ‘Jeff’s shed’.

Broadly speaking my first impressions were rather wet. It had been pouring like crazy the night before, and there was an amazing thunderstorm. The trains also were running late. By the time I had made it to the convention centre, I felt mildly drenched, and doubly regretted my choice of formal attire when I noted that all the other folk there – mainly students and young rapscallions in the games industry – were dressed more casually.

It was a fine day. I think the event was sponsored by a few notable players, including the local branches of Intel and Microsoft. Swinburne also made an appearance – they had a booth set up. And there was some graphics card manufacturer as well.

Soon it was time to witness the keynote address, and it all looked quite interesting. The discussion revolved mainly around the upcoming changes in Unity 5 and the big improvements over Unity 4, chiefly the new and improved light maps and the way that sound is managed. There was a preview of the new Unity user interface system, and it was mentioned that it offers performance improvements over OnGUI. There were also a few hints as to the future, with a preview of an evolving prototype slated for the ‘5.x’ release cycle of the Unity engine – a feature called ‘Director’ (I think that was what it was referred to as), where one could essentially define some form of workflow for a game, with things appearing in sequence via some form of state machine; this was demonstrated with a 2D runner featuring crates spawning, buildings moving in the background on a loop, and various obstacles appearing in a pseudorandom fashion. Other than that, a few figures were bandied around, such as that the Australia – New Zealand games industry does seem to generate a respectable chunk of GDP ($80MM, I think), and that roughly 70–80% of developers use Unity. There were also references to the phenomenal growth of Unity itself over the ten years since its inception, and the responsibility that places on the key developers.

Following the keynote I attended a talk upstairs on UNet, the upcoming multiplayer functionality that I’m quite excited about. The fellow there provided a ‘hack-and-play’ demo in which he migrated a game written without multiplayer to a game with multiplayer. The philosophy was that, contrary to the generally accepted wisdom in the games industry – that adding multiplayer ‘at the end’ of the release cycle means you are really only about 50% done – the new UNet workflow really does allow relatively easy migration of a game from single-user to multi-user use cases. However, the question I had (which I probably should have asked then, but didn’t think to at the time) was whether UNet allows for a centralised messaging server (like SmartFoxServer or many of the other offerings), or whether it is merely peer-to-peer multiplayer with host migration / failover. Which is fine, but I like the idea of having a server, as it allows one to control other aspects of a game or application, such as data.

Next I attended a talk on best practices for Unity. This was quite useful to me in many respects. Performance was a key area of focus, notably the warning to avoid using ‘find’-style functions in the ‘update’ loop of a Unity application, as otherwise things can slow down quite a bit; calling expensive or clunky code for every mob on every frame can really cause performance issues. Another hint was taking control of garbage collection oneself – triggering it regularly or only rarely, depending on how the game flows.
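The ‘avoid find-in-update’ advice generalises beyond Unity: resolve expensive lookups once, outside the per-frame loop. Here is a minimal sketch of the principle in Python – the `Scene` / `find` names are illustrative stand-ins of my own, not the Unity API:

```python
import time

class Scene:
    """Toy stand-in for a game scene with named objects."""
    def __init__(self, objects):
        self.objects = objects  # name -> game object (a plain dict here)

    def find(self, name):
        time.sleep(0.001)  # simulate an expensive scene-wide search
        return self.objects[name]

def update_naive(scene, frames):
    # Anti-pattern: repeat the expensive lookup on every frame.
    for _ in range(frames):
        scene.find("player")["x"] += 1

def update_cached(scene, frames):
    # Best practice: resolve the reference once, then reuse it each frame.
    player = scene.find("player")
    for _ in range(frames):
        player["x"] += 1

scene = Scene({"player": {"x": 0}})

t0 = time.perf_counter()
update_naive(scene, 100)
naive_t = time.perf_counter() - t0

t0 = time.perf_counter()
update_cached(scene, 100)
cached_t = time.perf_counter() - t0

# Both paths do the same work on the game state...
assert scene.objects["player"]["x"] == 200
# ...but the cached version avoids 99 of the 100 expensive searches.
print(f"naive: {naive_t:.3f}s, cached: {cached_t:.3f}s")
```

The same shape applies in a Unity script: look the object up in an initialisation step rather than in the per-frame update.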

Other aspects were kind of obvious but often not managed properly: for instance, having one folder for sprites, one folder for sounds, and so on – i.e. a logical structure to the project – and having logical naming conventions. I guess for me this is something I still need to get right, since my project is a bit of a mess, with assets and namespaces not really aligned properly, particularly after multiple migrations from earlier versions of Unity.

Finally, another best practice was limiting scope in order to ship – in terms of the project-management ‘triangle’ of scope, resources and time. I guess in my case it might be instructive to put aside my megaproject at some stage and focus on just shipping something, to gain an idea of what is involved.

After lunch, I attended a talk on ‘Unity for Architecture’ – a fork of the Unity game engine that is designed for architects and is a work in progress. It was a fairly impressive demo, wherein they imported terrain map files to generate terrain – based on real geodata – and then placed a house from Google SketchUp next to the Grand Canyon. But the most impressive part was the ability to hold the camera / viewport still and iterate towards a polished preview of a house. The resulting images were quite realistic and would definitely be brochure worthy. In fact, I was so impressed with this talk that I attended a follow-up talk upstairs afterwards. One person asked whether one might be able to pre-render a path and then ‘fly through’ a building with the photo-realistic images, which seemed to me quite an interesting question, although potentially extremely computationally expensive and maybe even impractical on top-end modern machines.

The last talk I attended was given by a graphic artist who was not a Unity employee, but still had some amazing experience, having worked on animated sequences for Saints Row 3, Civilisation 5, L.A. Noire, and Darksiders 2. It was quite interesting to hear about the experiences the fellow had had working on particular examples, and how he had managed his time and delivered to schedule. It was also fascinating to hear how at one stage he had run a studio building 2D iOS games. For me, it was a rare window into the industry.

All in all, I found it quite an interesting conference. Talks that I didn’t go to included one given by a Google engineer on ‘the new world of advertising / monetisation’, one on the new partnership between the Windows Store and Unity, and talks on the new sound system, the new Unity user interface, Xbox / PS4 game programming using Unity, and the future of Unity, plus a few others that all sounded fascinating. Unfortunately, I couldn’t go to them all.

I didn’t end up talking to many people in person – maybe a little silly, considering that part of the allure of these events is ostensibly the chance to network with like-minded souls – but there is certainly always next year. Maybe next year I’ll remember to dress down a little, too.