Unity Game Project – some thoughts

July 17, 2016

A few months ago I revisited my Unity Game project, and re-examined the feasibility of letting a dungeon master modify a level in real time while players were running around.  The idea was to keep it quite minimal: no combat mechanics, unnecessary animations, or fancy character progression, skill development, special moves and the like.  Just a world in which people could wander around, with static NPCs / monsters and objects like houses and trees.

Denis Lapiner’s multiplatform level editor looked promising to me a while ago, so at that point I looked into its mechanics.  It turns out that it works by creating a terrain object, serialising it, and saving it as a blob to local memory, which is then reloaded as a second scene.  Hence my immediate hopes of doing everything in one scene were readily dashed.

However, I’ve thought a bit more about this problem recently, and I have renewed hope that something interesting might still be possible.  I think, with some skill and cunning, it might be possible to modify the level editor scene so that two or three times a minute a snapshot of the terrain is taken and the diff applied to regenerate a preview scene, into which the players and everything else are pushed (as appropriate network objects), and which is also viewable by the dungeon master.  In essence, the DM would see two copies of the level as two subscreens: one being the level they are editing, and the other a non-interactive snapshot of it (an ‘autosaved’ version, if you will), which they can pan a camera around, or even avatar into, but not interact with via the level editor.  The second screen would be updated several times a minute (or, on a truly powerful device – maybe not now, but at some point in the future – several times a second, giving a proper refresh framerate).  One could also imagine the DM being able to ‘minimise’ the editor portion so as to focus on DM’ing in the ‘player’ or ‘viewer’ subscreen.
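To make the snapshot/diff idea a bit more concrete, here is a minimal sketch of the loop I have in mind – written in plain Python rather than Unity C#, with the heightmap reader, preview updater, and network broadcast left as placeholder callables, since the details depend entirely on what the level editor and Photon Bolt actually expose:

```python
import time

SNAPSHOT_INTERVAL = 25.0  # seconds, i.e. two to three snapshots per minute


def diff(old, new):
    """Return only the terrain cells whose height has changed since the last snapshot."""
    return {cell: height for cell, height in new.items() if old.get(cell) != height}


def snapshot_loop(read_heightmap, apply_to_preview, broadcast):
    """Periodically snapshot the editable terrain and push the diff to the preview/player scenes.

    read_heightmap   -- callable returning {(x, y): height} for the terrain being edited
    apply_to_preview -- callable applying a diff to the DM's read-only preview scene
    broadcast        -- callable sending the same diff to connected players (e.g. via Bolt)
    """
    last = read_heightmap()
    while True:
        time.sleep(SNAPSHOT_INTERVAL)
        current = read_heightmap()
        delta = diff(last, current)
        if delta:
            apply_to_preview(delta)
            broadcast(delta)
        last = current
```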

The viewer subscreen need not be totally toothless, however, as prefabs might still be instantiable within the level using the standard Unity mechanics.

The players themselves would see a separate scene that simply shows the current snapshot of the DM level – like the ‘viewer’ subscreen for the DM, but as their entire world.  They might be able to instantiate prefabs as well, but they would not be able to sculpt or modify terrain like the DM.

In terms of the desired experience for a player, they should, once every 10 to 20 seconds or so, see the level shift around them to reflect the latest changes that the dungeon master has made.  Something that would need to be handled, however, is the question of where the ground is.  The player’s z coordinate would therefore need to be constantly remapped so as to preserve their height above the terrain at the (x, y) position where they are located.  Players would also need to be able to reposition their avatar via a third person perspective if they get stuck in a wall or an object that has suddenly appeared.
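A sketch of that remapping rule, again in plain Python with hypothetical height-lookup callables (in Unity one would presumably sample the terrain directly, via something along the lines of Terrain.SampleHeight):

```python
def remap_player_height(player, old_ground_height, new_ground_height):
    """Preserve the player's clearance above the ground when a new terrain snapshot arrives.

    old_ground_height / new_ground_height -- callables (x, y) -> terrain height for the
    previous and the freshly applied snapshot respectively (hypothetical helpers).
    """
    clearance = player.z - old_ground_height(player.x, player.y)
    # Never remap to below the new ground, even if the player was clipping before.
    player.z = new_ground_height(player.x, player.y) + max(clearance, 0.0)
    return player
```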

The interface that I plan to use to manage the networking will still be Photon Bolt, with Zeus as a server running somewhere on amazon ec2.

So that actually starts to seem vaguely doable.  And maybe, at some point in the future, with breakthroughs in personal computing – and perhaps game engine improvements too – one might be able to ramp up the scene refresh rate, and maybe even enable the experience for VR.

Some thoughts regarding M-theory

June 14, 2016

One of the things that I have been thinking a bit more about recently is the nature of what the theory previously known as string theory is supposed to be about.  There is some murky intuition – that Feynman diagrams should somehow be lifted to dealing with Teichmüller spaces, the amplituhedron, and the like.  But it has occurred to me that this is rather a straw concern relative to what is really at the heart of the matter, or the structure that people are concerned with here.  For it seems that there is a category error, so to speak, or a particular cognitive fallacy, in that the pre-school analogy for the structure people are seeking to study is erroneously viewed as one and the same as the true underlying structure.

This is not obvious, but it has become slightly clearer to me now over the years, now that I have had time to properly study exotic geometries and their associated geometric invariants, information theory, and higher category theory.  Unfortunately, the malaise associated to this has become a rather peculiar game of information and disinformation – it is, in fact, hard to tell where the pieces sit.  But, as always, it is helpful to be guided by good mathematical and physical intuition.  Truth is truth.

So, what is string theory / M theory – what is it actually supposed to be about?  Well, certainly one simple statement I could make is that it is an extension of L theory (concerning which there are many excellent results due to Ranicki), which in turn is an extension of the K theory to which many deeply talented 20th century mathematicians contributed, such as Atiyah, the Bourbaki collective, and Grothendieck.

But that is just replacing one term with another.  So, what is K theory?  It is the cohomology of schemes.  And what is cohomology?  That tells you, roughly, how to categorise a differential form in a local chart on a manifold.  A scheme, per my admittedly simplistic viewpoint, can be viewed as an object which, within a local chart, may be associated to the twisting of two algebraic varieties (which can be viewed as charts of an Alexandrov space).  So, if {(x, y, z) | f(x, y, z) = 0} is one variety, and the other is {(x, y, z) | g(x, y, z) = 0}, then a scheme might be given by {(x, y, z) | \circ(f; g)(x, y, z) = 0}, where \circ(f; g) is the composition of f with itself g times.  And there is more than one way to construct an exotic twisting of f by g (there are, in fact, three): we can also act on f by g with repeated multiplication, or with repeated exponentiation.
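To spell the three twistings out a shade more explicitly – and this is only a rough schematic, since the ‘number of times’ is here governed by the function g rather than by a fixed integer – the operations I have in mind are, loosely:

```latex
\circ(f; g)  = \underbrace{f \circ f \circ \cdots \circ f}_{\text{``$g$'' times}}, \qquad
\star(f; g)  = \underbrace{f \cdot f \cdots f}_{\text{``$g$'' times}}, \qquad
\wedge(f; g) = \underbrace{f^{\,f^{\,\cdots^{\,f}}}}_{\text{``$g$'' times}},
```

with the scheme, in this simple picture, cut out by the vanishing of the twisted function.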

So that is K theory, and that is schemes, or 1-schemes.  So, what about L theory?  Well, L theory concerns itself with 2-schemes, which can be constructed by the twisting of three 1-schemes together in one of five different ways – with operators \circ, \circ_(2) (higher order composition / tetrated composition), \star, \wedge, and \wedge_(2) (higher order exponentiation / tetrated exponentiation, or pentation).

Hence we have 2-schemes, and L theory.  There is a deep, deep well of structure waiting to be explored in that area alone.  But what about M theory?  Well, M theory concerns itself with 3-schemes, which are constructed by the twisting of five 2-schemes together in one of seven different ways – with the operators above, as well as \circ_(3) (pentated composition) and \wedge_(3) (pentated exponentiation, or heptation).

However, this is a slight simplification, because we also have exotic operators that act on 3-tuples of functions (rather than 2-tuples as above), as well as on 5-tuples of functions (these operators are related to precursor geometries for cybernetic and meta-cybernetic structures, respectively).

Evidently the theory here is fantastically, wildly, stupidly deep.  The structure is gloriously fascinating and elegant in its form.  And, with this viewpoint, one is now prepared to read the papers of the leading lights of the field with a new perspective.

Additionally, however, we can ask the question – why jump from K theory straight to M theory? What is the reasoning for that?

I can hypothesise an answer – it is, perhaps, because M theory is a 3-categorical construction: it makes most sense in the language of 3-categories.  Just as K theory deals with 1-categories, with function spaces as the atomic objects, and L theory deals with 2-categories, with function-function spaces as atomic (ie, functionals forming a basis for the space), M theory deals with function-function-function spaces, or meta-functionals, as a basis for the space.  Hence, the geometric invariants of M theory are meta-meta-functionals.

Now, for reasons that I think it might be possible to bludgeon into logic, one might, by a peculiar association, conflate the four levels of subtlety traversed so far – ordinary set theory, or 0-category theory, through to 3-category theory – with the dimensions of ordinary space.  The study of M theory is therefore motivated because one can view the structure or scaffolding of the abstraction itself as manifesting the underlying structure of reality – or the meta-information associated to the theory.  Which is really, really weird if you think about it.  But maybe this is the case.

Another reason the jump from K theory to M theory might have been made is some kind of quixotic quest to discover a ‘theory of everything’.  But the problem is, this is only the fourth level of subtlety – there are a fair few more steps to traverse before one gets to aleph null.  Another four more, at the very least!

So that is my current thinking regarding this particular circus.  L theory and M theory, with their associated higher categorical abstractions, are areas of mathematics that are nowhere near as deeply plumbed as K theory, and which have at least as much structure, if not more.  Since K theory occupied a generation of the finest minds on the planet for a number of decades, I can see that these other areas could definitely warrant a certain degree of attention and respect.

It is perhaps unfortunate that such noble structures as these have been given short shrift in the way their applications to physics have been portrayed.  There are probably two reasons that things have become slightly strange here.  One is that the public at large, and in particular the educated public, are a touch or two smarter and better informed about physics than most full-time intellectuals might be prepared to credit.  However, they are not, perhaps, sufficiently interested in the main to reason their way past the stick-figure interpretation of the phenomenology associated to higher categorical descriptions of reality, and to see that there might be more to it than ‘vibrating modes of a string’.  The fact is, to my mind, there are no strings – this is just an artifact of conflating the arrows in a commutative diagram within higher category theory with actual physical objects, combined perhaps with some confusion over tubes in Feynman diagrams.

The other reason is that the majority of academics who concern themselves with this sort of thing are perhaps not as informed as they ought to be.

Indeed, it is a pity, because the discipline suffers for the lack of an adequate popular science type explanation for many of these things, as well as a faithful portrayal of the associated objects.

A further confusion is whether the structures themselves should dictate how invariants are to be defined for them, or whether the way that invariants or physical principles are developed is to be considered part of the theory.  The received wisdom seems to lack a clear narrative on this point, which can lead to confusion.  For instance, there is a fair bit of promising work in other areas, such as the information sciences and machine learning, that could inform the discussion of how to derive geometric invariants for a given structure from first principles.

A very foolish brain dump

April 2, 2016

Well, it is no longer really April Fools’ Day in many parts of the world, but I guess it still is in the States, so I can probably get away with the above.  In short, I thought I would write about a number of things that I find interesting or that I’m thinking about, and maybe then sketch a short story idea or two.

Unity package / feature request

Something that I’m still interested in is a level editor for Unity3D that allows people to edit a level in real time while other players are running around within it.  Sort of like Roll20, but in 3D.  There are a few technologies that I think are reaching towards this – the multiplatform level editor plugin here, and the Scene Fusion multi-user Unity editing plugin here.

The latter technology looks very useful for building games (as collaborative development and improved team productivity is really the market that it is angled towards, and it looks tremendously useful for that), and probably would double as sufficient for the purposes of moving people around a world and pretending that one was in a Roll20 style role playing game sandbox.  So I could certainly imagine that if there was a ‘master editor’ or admin user who could lock down certain parts of the unity editor then that would probably be enough for my use case.

The former technology does not extend the Unity editor but rather runs in-game.  It allows the user to edit a scene at runtime and save the level to a serialised format that can then be loaded and run.  There have been many improvements to this over the last couple of years; however, it does not really support what I’m after, which is the ability for somebody to add things to a scene while the ‘play’ mode of the game is active, and then export the current state of the level to a serialised format (after the manner of a ‘saved game state’).

Broadly speaking, however, the code for the former is available and quite amenable to forking, so it is possible that it could be modified to let me plug in something like the Bolt/Zeus networking stack (download links here) to support a multiplayer scenario.  However, I am lazy (and also time poor), so I find it much more efficient merely to write about what I’m after.  Scene Fusion would probably support what I want, but it is overkill, and I also would not have visibility of the code in the way that one does with Bolt and the multiplatform level editor.  I probably also wouldn’t be able to tinker with the networking servers in the same way that I could by running Zeus on a RightScale-managed Amazon EC2 instance.

Ultimately I’m not after purchasing a game, or a piece of software that is a black box to me, but rather I’m keen to use something that I can tinker with, recompile the individual components, and learn a bit from.  Hopefully that makes sense.

Mathematics

I’ve made a bit more progress with a paper that I’ve been working on.  In particular I managed to prove existence and uniqueness of a lift of a particular function to the reals, and also came up with an interesting identity for a generalisation of another function, which may or may not be true.  Over the next month or two I hope to make a bit more progress.

Additionally, I’ve been considering publishing a presentation that I gave at a conference a few years ago, but I’ve been procrastinating a little in that regard.

Quantum machine learning and the Fredkin gate

So, recently I discovered that quantum machine learning is a thing.  My interest in this area was piqued when I started wondering what might be next for AI after the triumph of DeepMind’s deep learning system trained to play Go.

In particular it has struck me (and others) that with the advent of quantum computers within the next ten years or so, it might be possible to take the architectural subtlety of machine learning to new heights, and get closer to that goal of AI, for machines that truly think.  Indeed, many are arguing that this is the ‘killer application’ for quantum computing.  I’m slightly late off the mark here, since the inception of the field was a couple of years ago in 2014, but this is still a fairly young discipline and I think many exciting things lie ahead for it.

There are essentially two key observations in this respect: observation the first, that existing algorithms could run much, much faster on a quantum computer and with much, much more data.  However, much more exciting is observation the second: that quantum computers would allow one to construct much more complex and intricate algorithms for supervised and unsupervised learning.

Towards the objective of actually building such a device, there was a recent breakthrough in which researchers managed to construct a quantum Fredkin gate.  The reason this is exciting is that Fredkin gates are universal: any logic operation can be built by stringing a series of them together (rather like NAND gates in conventional logic diagrams).
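To make the universality point concrete, here is a toy classical sketch in Python (not a quantum simulation, just the truth table): the Fredkin gate is a controlled swap, and pinning some of its inputs to constants recovers AND and NOT, from which any Boolean function can be assembled.

```python
def fredkin(c, a, b):
    """Controlled swap: if the control bit c is 1, swap the targets a and b."""
    return (c, b, a) if c else (c, a, b)


def AND(c, a):
    # With the second target pinned to 0, the last output is c AND a.
    return fredkin(c, a, 0)[2]


def NOT(c):
    # With the targets pinned to (0, 1), the last output is NOT c.
    return fredkin(c, 0, 1)[2]


assert all(AND(c, a) == (c & a) for c in (0, 1) for a in (0, 1))
assert all(NOT(c) == 1 - c for c in (0, 1))
```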

Of course, this might not necessarily be ‘the’ critical advance, but quantum computing is an area that is certainly seeing a large amount of progress at the moment.  Doubtless people are hungry for more computational power, beyond what is achievable with classical computers – and the time seems perhaps right, with the slowdown currently being experienced in Moore’s law for run-of-the-mill circuitry.

Google compute cloud

Another thing that I recently learned is how deeply subtle and sophisticated Google’s cloud computing operation has become.  With products such as Bigtable, BigQuery, and Dataflow (whose programming model underpins Apache Beam), among others, Google has truly become a formidable contender in the area, from a relatively late start against Amazon’s established lead just a few years ago.

I will not write much here because I do not understand too much about what Google has to offer, but I thought I would point out how quickly this area appears to have moved, and how sophisticated cloud computing services have become.  The price points do not seem too bad, either – I learned the other day that one can hire a t2.nano from Amazon for about $3.50 USD a month, which is amazingly cheap (although, of course, you get what you pay for, which, in the case of a t2.nano, is next to nothing at all).

Automated testing of virtual reality applications or ‘websites’

As a tester (my current gig) I have been wondering as to what automation testing might look like for a virtual reality based application or website.  I think it could be quite interesting to write tests for such beasts.

Towards a ‘static’ virtual reality website on the web

I am also interested in what a ‘static’ virtual reality website might look like.  In particular, it would be nice to have, say, a room with bulletin boards on the walls and clickable links, or the ability to read a newspaper lying on a virtual table, for instance.

Integration testing of microservices

I’ve noticed that there is a trend in microservices testing towards using Docker Compose and Jenkins: on a pull-request build, a container is spun up for the project, along with containers for adjacent projects.  The idea is then that one hits endpoints on these boxes to verify that the APIs are working correctly.  This is something that I would like to see better documentation for on the web, but so far most of the knowledge seems to be locked up in the heads of specialists, and there are not many public projects or tutorials (or courses!) geared towards teaching people how to do this form of blackbox testing.  Hopefully in the next year or two there will be some more chatter on the web in regards to this area, because it is certainly something that I’d like to do a bit more of.
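As a rough sketch of the kind of test I have in mind (Python with the requests library; the service names, ports, and the /health route are purely hypothetical stand-ins for whatever the compose file actually defines):

```python
import os

import requests

# Hypothetical service URLs, as they might be wired up in a docker-compose file.
SERVICES = {
    "orders": os.environ.get("ORDERS_URL", "http://orders:8080"),
    "inventory": os.environ.get("INVENTORY_URL", "http://inventory:8081"),
}


def test_services_respond():
    """Hit a health endpoint on each composed container and check that it answers."""
    for name, base_url in SERVICES.items():
        response = requests.get(f"{base_url}/health", timeout=5)
        assert response.status_code == 200, f"{name} returned {response.status_code}"
```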

Story ideas

In lieu of writing a story, I thought I might jot down a few (short) story ideas.  Maybe I’ll post a short story to this blog tomorrow to make up for sidestepping my previous promise in said regard.

  • I thought it might be interesting to write something based in a world where some corporation exerts tremendous control.  Over time, the protagonist eventually finds their way to the executive of this company, and discovers a terrifying secret: that the Chief Executive Officer is actually just a magic 8-ball named Terence.  ‘The shareholders love him.’
  • A horse play.  A children’s story written as a play, where the main contenders are essentially just horses that do not behave themselves particularly well.
  • Writing about the present from the perspective of the past (eg, the way SF authors in the 1960s viewed today), while inserting incongruous references.  For instance, writing about the early years of the 21st century from a moon base, and then making reference to a contemporary event that actually has happened.

Short Story 1: All will be repeeled

January 31, 2016

Foreword

As part of a slightly new direction for this blog, I thought I might aim to write and polish a short story, or something that could be a chapter in a book, once per month.  My reasoning is that my other interests are either met by other publication venues, or have been curtailed recently by lack of time (namely, my Unity project, which is also hampered by the fact that source control of Unity projects is a bit temperamental, to say the least).

I also might abstain temporarily from talking about some of my favourite topics (such as gene editing, nanotechnology (including 3D printing, bioengineering, …), quantum engineering, quantum computing, the docker/kubernetes stack, miscellaneous cool open source projects, spaceplanes, nuclear fusion (including the Wendelstein 7-X stellarator), sustainable agriculture, vertical farming…) so that I can refresh my mind and look with fresh eyes upon these things in another few months, or maybe more.  I’d prefer not to oversell some of these topics, or write at too much length on them.  In terms of my tastes in this regard, I think writing sparingly is the best approach; I do not particularly see myself as a fanatical evangelist for any of these topics, but more as someone who likes to point out things that seem potentially important, and then move on – leaving the task of pushing these things forward to others with deeper knowledge and established expertise, or to those more connected to the centre of innovation in the area.

Instead, I’m going to take a break from technical writing and pivot slightly towards writing of a more fictitious bent.

So, what now?  I think, huzzah, a story!  Yes, what fun!

All will be repeeled

Three antennae Capsaisin hadn’t seen an Antelope like that since kernel Custard had saturated the colony.  It was a mess; who was going to send a postcard and tell the kitchen that the mapping had not been quite surjective?  Because it hadn’t – a few survivors (Sir Jective included) had held the fort against the terrible deluge.

Nonetheless, those that had not ab-sconed for baguette country had managed to profit from the experience.  There were ample supplies now of energy to restock and rebuild.

Cap’s job was a simple one in the colony; he was a judge, of sorts.  If workers had a dispute that required compensation, he’d preside over the dispute while lawyers settled the intricacies of who hadn’t quite pulled their weight in a seed gathering expedition, or of a poor soul who had followed the pheromone trail and ended up in a puddle, and sought recompense from the expeditionary overseer for misrepresentation of trail safety.

He would have sighed, if he could have sighed.  It was a thankless task, but better than scurrying around through the hills of the never never as he had in his youth.

Today’s proceedings were slightly different, however.  Three young interlopers were seeking a repealing; in particular, of an Orange.

“I must insist that the orange be returned to its former state, so that portions can be divided appropriately,” stated the first of the three.

“Impossible.  A protest against your demands.  Entropy cannot be reversed.”

“Aha, but it is a state machine.  A Mark, of chain – see these markings?  From the wanderers of the plains.  Apache Ant.  And you yourself stated that you are pro test.”

Cap thought for a moment.  “Hmm.  So it seems that this must needs be settled.  And I have in mind an ideal for resolution.”

“Speak.” said the third, who had been quiet until then.

“You spoke of a test.  I propose a challenge, a trial.  To bring back the herd from the den of toothbrush.”

There was a hushed silence.  “You mean…?”

“Yes, those that were scattered before the event of Safeway’s Premium Custard.  That went to the wilds.  Return them, and we shall discuss how the impossible might once again be possible.”

The court adjourned with relative alacrity, as had not been seen since the match between Chemical Andrea and Turncoat Susie.  Fire would soon fly, but not before the match was lit.

To be continued…

Takeaways from a recent spark meetup

December 9, 2015

Recently I attended a Spark meetup in my home town.  I learned about a few things.   In no particular order:

Haptics, dark glasses, and a broad brimmed hat

December 7, 2015

There has been a fair amount of hype regarding things in this general area, so I’ll keep this reasonably short.  Essentially my interest in writing this post is to indicate what I’m looking for in a portable working environment as a ‘laptop replacement’.

To cut to the chase, the key thing that I’d be after would be a VR interface hooked up to a pair of modified dark glasses, and haptics to provide me with the sensation of typing on a keyboard, or moving a mouse.

Then one could essentially have an AR/VR type interface that one could use almost anywhere, and which could run off the computer in your pocket, or a workstation at home or the office.  So quite practical, really.

2016 should see some movement in this general sort of direction, but my guess is that it’ll take 5 to 10 years for the area to truly come into its own, and maybe 10 to 15 to properly mature.

End to end testing in kubernetes

December 7, 2015

Something that I continue to be interested in is the possibility of end to end testing using kubernetes. The project has come a long way since its launch on github a little over a year ago, and it is interesting to see where it might head next. The purpose of this post is to explore the focus of my own particular fascination with it – the possibility of end to end testing complex workflows built of multiple microservices.

Currently kubernetes does have an end to end test suite, but its purpose seems largely to be testing the kubernetes project itself. However, there are the seeds of something with broader utility in the innards of the beast. Some contenders downstream have started building ways to make writing end to end tests for applications easier, such as fabric8-arquillian for openshift applications. More recently, about twenty days ago, a pull request was merged that looks like it might go a step further towards providing a more user-friendly framework for writing such tests. And the documentation for the system is starting to look promising in this regard, although a section marked examples therein is still classified as ‘todo’.

But why this obsession with end to end testing this way? Why is this so important? Why not simply as a tester write browser tests for an application?

The problem is that browser tests do not capture a lot of what is going on behind the scenes, but really only the surface layer, the user-facing component of an application. To get truly immediate feedback about whether or not a particular part of a workflow is broken, one needs something that can plug into all the backend machinery. And, in the world of the modern stack, that essentially means something that can plug into containers and trigger functionality or monitor things in detail at will.

Kubernetes is of course not the only technology that has promise here. The more fundamental building block of Docker is starting to become quite versatile, particularly with the release of Docker 1.9, and the Docker ecosystem (Docker compose, swarm, and machine) may yet have a role to play as a worthy competitor, in the quest for the vaunted golden fleece of the end to end testing champion merino.  Indeed, anything that might well provide a practical and compelling way to end to end test an application built out of multiple containers is of interest to me, and although I am currently sitting on the kubernetes side of the fence, I could well be convinced to join the docker platform crowd.

My current thinking on a system that provides great end to end testing capability is the following: it should use docker (or rkt) containers as the most atomic components. One should have the ability to hook into these containers using a container-level or orchestration-service-level API and activate various parts of the microservice therein, which have been built in such a way as to facilitate a proper end to end test. There is, however, a problem here: the security of the system needs to be watertight, so altering production code is understandably not particularly desirable.

Instead, it would be great to somehow be able to hook into each docker container in the pathway of an application, and read the logs along the way (this can be done today). It would also be great to be able to do more. One idea might be to write code in the ‘test’ folder of the application, next to the unit tests, that will only be run when a testing flag is set to true. Alternatively, maybe one could add API endpoints that are only opened in the staging environment and are switched off in production (although this, of course, may still introduce unacceptable risks). If one had these additional endpoints, one could then read the state of particular variables as data flows through the application in a test, and check that everything is functioning as it should be.
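To illustrate the staging-only endpoint idea, here is a hypothetical sketch (a toy Flask service of my own devising; the route names and the E2E_TESTING flag are not any established convention):

```python
import os

from flask import Flask, jsonify

app = Flask(__name__)

# Internal state that an end to end test might want to inspect mid-workflow.
last_processed_order = {}


@app.route("/orders", methods=["POST"])
def process_order():
    # ... normal production handling would update last_processed_order here ...
    return jsonify(status="accepted")


# The debug endpoint only exists when the environment says we are in a test/staging run.
if os.environ.get("E2E_TESTING") == "true":

    @app.route("/debug/last-order")
    def last_order():
        """Expose internal state to the test harness; never registered in production."""
        return jsonify(last_processed_order)
```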

The alternative to directly debugging code in this way is of course simply to log the value of a variable at a particular point, and then read the logs. Maybe this is, indeed, all that is required, together with writing tests for all of the standard endpoints of each container and checking that they are working as they should. There might be an elegant way of writing helper libraries to log something to a docker log file for testing purposes only, and then expose this information cleanly via some form of endpoint.

Anyway, the area of end to end testing multi container applications is quite an interesting one and I am keen to see how it develops and eventually matures. Together with monitoring tools like Prometheus, there is a large amount of potential for good work to be done in this direction and I am excited about where things might eventually lead with this.

Thoughts regarding webvr

August 24, 2015

I recently came across the work that mozilla have been doing on virtual reality for the web, or webvr.  This is interesting because the promise of moving beyond dynamic webpages towards a fully immersive experience of cyberspace offers a tremendous opportunity to improve user experience, extend the capabilities of the web to provide information services, and improve user productivity.

In particular, the changes that will likely start to be seen over the next few years or so – led by organisations like facebook and maybe game developers – are quite reminiscent of William Gibson’s description of the web in Neuromancer.  It would truly be interesting if retail of the future were conducted not via the browse-and-click of amazon or ebay’s current websites, but in immersive VR environments staffed either by real people or, more likely, by programs running off deep learning algorithms or other more sophisticated techniques, with more powerful back-end tooling / databases / processing architecture running the minds of VR shop assistants.  Not to mention the ways that people might use the technology not only for retail, but for community, whimsical purposes, or means of artistic expression.

Alternatively, it is possible that this sort of change might complement the adoption of ways of viewing data presented by the hololens project, or other augmented reality initiatives.

The key problem with the work that has been done to date, however, is that the ways of interacting with the worlds therein divorce one dangerously from the real world.  At least in front of a computer one can hold a conversation, or turn and look somebody in the eye.  It might therefore be useful to envisage a future where, instead of wearing a VR headset, one has a pair of glasses with a lightweight VR computer, bionic eyes, or even a direct or indirect neural interface, which would recede based on preferences set by the individual, or via the judgement of a proto-intelligent, ‘cortana’-like robot secretary.

As a minor admonition, I think it is important not to be suffocated by developments like this, and probably not to be overly hasty about embracing nascent technologies.  But assuming that this is going to be the future, I think it is important to understand it and the possibilities it presents, while thinking about how to maintain a healthy connection to one’s humanity.  Coupling the coming changes with augmented reality and with older technologies, and trying to shrink and detach things so they do not get in the way of real, authentic conversations, communications, and community – these are things I would certainly bear in mind were I to consider adopting anything like this in the future, or experimenting with toy examples using any of the associated tools or SDKs.

A few developments that I think are interesting and worth following

July 2, 2015

No particular updates on my game project at the moment, unfortunately.  I have been thinking a little about a couple of other things of a slightly more abstract nature – namely model selection, and a generalisation of the idea of the radical of a number – but these, too, are works in progress.  Of course, Unity 5.1 is now out with the first phase of Unity’s new multiplayer networking API, but I think I’d like to take the Bolt and Zeus client-server architecture for a spin first, and see where that leads (though interestingly enough, it now appears that Bolt has been acquired by Photon, one of the bigger players in multiplayer networking, who have, amongst other things, targeted Unity3D games as part of their strategy).

So instead of talking about what I’m up to currently (in terms of my hobby projects) I thought I might instead share a few findings with you, the reader, that I think are useful and important.

1. Desalination using nanoporous single-layer graphene

One of these things is a recent article (as of March 2015) published in nature nanotechnology, here, on desalination (news digest here).  There are a number of things here that are significant:

  • The use of nothing except for graphene membranes, ie, carbon, so potentially a fairly cheap thing to produce
  • The potential for an almost 100% salt rejection rate
  • The high potential throughput (up to 10^6 g m^-2 s^-1), albeit induced via an applied pressure differential

So quite exciting.  It looks like, with an appropriate implementation, this sort of technology could lead to relatively cheap, clean, fresh water (certainly cheaper than current techniques, which are based either on brute-force methods (such as heating the water and collecting the vapour) or on electrodialysis reversal (which has been used in commercial systems since the 1960s)).  And water is fairly important to have to hand.

2. Single stage to orbit spaceplane ‘Skylon’ on track, target date 2019-2022

Reaction Engines Limited seem to be making good progress on their SSTO spaceplane concept.  This relies heavily on the Sabre engine, which, through 3D printed nanotube components, has the following capabilities:

  • Can cool air from 1000 degrees celsius to -150 degrees celsius in 0.01 seconds.
  • Through proprietary methods and techniques, the critical components of the engine are kept impervious to frosting over at subzero temperatures.

In the last few years, the company has had the following nods of approval from various governmental bodies and organisations:

  • A 60 million pound funding injection from the British government in 2013
  • Feasibility study conducted by the ESA, leading to their seal of approval in 2014
  • Feasibility re-affirmed by the United States Air Force Research Laboratory in 2015

Furthermore, it now appears that the project is ramping up, with assistance and expertise being provided to REL from the United States, and the hiring of various people to the program with decades of experience working in the industry.  So very promising.  The target date for potential test flights of the full blown SSTO craft could be as early as 2019, with supply runs to the International Space Station by 2022.

The great thing about this project is mainly the cost per kg of delivering a payload to orbit.  Building a spacecraft using these techniques could lead to stupendous gains in efficiency, decreasing costs from the current £15000/kg down to £650/kg.  In other words, this development could open the solar system up for potential commercial use (albeit, hopefully, regulated by an appropriate global agency), and would certainly make it possible to construct interplanetary spacecraft (for manned missions, say, to Mars), or logistical waypoints in orbit – supplied by shuttle runs – to support asteroid mining operations, at significantly reduced cost.  Naturally, this in turn (initial support of targeted asteroid mining operations, say within the window 2030-2040) would address another problem, namely the increasing scarcity of rare earth metals (although recycling could be a partial solution there).

3. Agriculture in arid areas

Something that I continue to follow is the development of the still-nascent area of farming in arid regions, using little more than seawater and sunlight to produce food for consumption.  The reason for this interest is that there is no shortage of seawater, or of arid areas relatively close to the sea, in the world, so there is a considerable opportunity for innovation and growth in this area.

There are a few projects in particular that I am interested in here:

Both Seawater Greenhouse and Sundrop Farms use a similar sort of system – they pump saltwater, either from the water table (if close to the sea) or from the sea itself, into greenhouses.  Evaporation driven by solar energy then cools the greenhouse and irrigates the plants inside.  This is a gross oversimplification, of course, and there have been decades of work polishing this general outline, but that is the idea.  There are certain other risks that one needs to deal with in such an operation as well, such as maintenance costs, not to mention how one might deal with a 5 metre sea level rise, or occasional storms.  Regardless, it appears that the technology has now become mature enough to start paying dividends – in a recent press release Sundrop Farms announced their partnership with Coles supermarkets.

The ISEAS project is slightly different.  It uses saltwater ponds to grow shrimp and fish for food, mangroves to purify the saltwater, and plants grown with the purified water to provide nutrients for the ponds (closing the loop) and also provide oil (in their seeds) for biofuel production.

So it looks like there is a fair bit of promise in this general direction.

Some thoughts on developments in enabling technologies over the next fifty years

May 7, 2015

Hi folks,

I thought I might share a few thoughts with you today on something that I’ve been thinking about for a while.  It is a fairly standard problem in computational mathematics, and arises naturally when dealing with solutions to partial differential equations with geometric invariants involving tensors of relatively high order (4+, eg, elasticity tensors in materials science).  Namely, the matter of solving these equations over a 2 to 4 dimensional domain using finite element methods, ie, numerically.

It is simple enough to solve Laplace’s equation or the heat equation over a square domain, but the problem comes when one increases the number of dimensions – or introduces a non-trivial (read: non-flat) metric tensor, such as when exploring numerical solutions / computational simulations of the equations of general relativity.  The former problem is easily solved on a desktop computer; the latter requires a relatively powerful supercomputer.

Why?  The key problem is that, as one increases the number of dimensions in the classical approach to solving a system of equations numerically, the time to converge to a solution (if a solution exists – but that’s another matter entirely, involving the growth and control of singularities) increases exponentially.  For instance, for a square domain 100 finite elements wide, one needs to perform operations on 100 x 100 units maybe 50 times until convergence.  For a cubic domain, it is 100 x 100 x 100 units, another 50 times.  For a quartic domain, it is 100^4 units, another 50 times.  So the issue is clear – it is devilishly hard to brute-force problems of this nature.
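To put rough numbers on that scaling (just the arithmetic from the example above, in Python):

```python
# Back-of-the-envelope element-update counts for a d-dimensional grid,
# 100 finite elements per side, ~50 sweeps to convergence.
ELEMENTS_PER_SIDE = 100
SWEEPS = 50

for d in (2, 3, 4):
    updates = SWEEPS * ELEMENTS_PER_SIDE ** d
    print(f"{d}D domain: ~{updates:.1e} element updates")

# Prints roughly 5.0e+05, 5.0e+07 and 5.0e+09: every extra dimension costs a factor of 100.
```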

For this reason, contemporary approaches tend to use sparse matrix methods (for ‘almost flat’ tensors) or impose a high degree of symmetry on the problem domain under exploration.  But this doesn’t really solve the underlying problem.  Surely one must be able to find a better way?

Well, maybe with quantum computers one might be able to.  A recent paper (2013), as well as this slightly older document (from 2010), suggests that one can solve linear systems of equations exponentially faster on such a computer.  So, if true, this is encouraging, as it would render computational problems like this somewhat more tractable.  The developments at Microsoft Station Q on topological quantum computers, and the progress on more conventional quantum computers by researchers at other labs, such as IBM, who recently made a breakthrough in error correction (extending possible systems of qubits to the 13 to 17 qubit range), suggest that the era of quantum computing is not too far away – we may be less than 20 years away from desktop devices.

In a moderately more speculative vein, I am intrigued by the more futuristic possibility of going beyond quantum computing, into the domain of what one might call hyper-computing.  This would deal with systems of ‘hyper-qubits’ that are ‘hyper-entangled’.  That’s probably a bit vague, but I sort of have in mind atomic components of the system that have multiple quantum numbers, each of which is entangled with the other atoms of the system, as well as entangled internally.  So essentially one would have two ‘strata’, or levels, of entanglement.  The key idea is that it might be possible to scale computing power not linearly as with bits, or exponentially as with qubits, but as 2 tetrated by the number of ‘hyper-qubits’ hyper-entangled.  That would be a stupendous achievement, and, yes, futuristic.  At the very least, it would make problems such as the one I have described above much, much easier to solve, if not outright trivial.

For a slightly better idea of how this might work, the general idea would be to entangle a system (such as a pair of photons) in N degrees of freedom, as opposed to merely one for a standard qubit, and then to entangle this system with M copies of the same.  So there would be two strata to the machine, ie, it would be an (N, M) hyper-entangled system.  Then, if one could scale N and M sufficiently, potentially by increasing the complexity of the system’s constituent ‘atomic systems’, I suspect one would essentially have a system whose power grew as some number tetrated by some other number, the latter growing as a monotonically increasing function of N and M.
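Just to illustrate how quickly the speculated scaling would run away, taking the linear / exponential / tetrational contrast above at face value (Python):

```python
def tetrate(base, height):
    """Iterated exponentiation: tetrate(2, 4) = 2 ** (2 ** (2 ** 2)) = 65536."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result


for n in range(1, 5):
    print(f"n={n}: linear {n}, exponential {2 ** n}, tetrational {tetrate(2, n)}")
# By n = 4 the tetrational column is already 65536, versus 16 for the exponential one.
```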