Short Story 1: All will be repeeled

January 31, 2016

Foreword

As part of a slightly new direction for this blog, I thought I might aim to write and polish a short story, or something that could be a chapter in a book, once per month.  My reasoning is that my other interests are either met by other publication venues, or have been curtailed recently by lack of time (namely, my Unity project, which is also hampered by the fact that source control of Unity projects is temperamental, to say the least).

I also might abstain temporarily from talking about some of my favourite topics (such as gene editing, nanotechnology (including 3D printing, bioengineering, …), quantum engineering, quantum computing, the docker/kubernetes stack, miscellaneous cool open source projects, spaceplanes, nuclear fusion (including the Wendelstein 7-X stellarator), sustainable agriculture, vertical farming…) so that I can refresh my mind and look with fresh eyes upon these things in a few months, or maybe more.  I’d prefer not to oversell some of these topics, or write at too much length on them.  I think writing sparingly is the best approach here; I do not see myself as a fanatical evangelist for any of these topics, but more as someone who likes to point out things that seem potentially important and then move on – leaving the task of pushing them forward to others with deeper knowledge and established expertise, or those closer to the centre of innovation in the area.

Instead, I’m going to take a break from technical writing and pivot slightly towards writing of a more fictional bent.

So, what now?  I think, huzzah, a story!  Yes, what fun!

All will be repeeled

Three-antennae Capsaisin hadn’t seen an Antelope like that since Kernel Custard had saturated the colony.  It was a mess; who was going to send a postcard and tell the kitchen that the mapping had not been quite surjective?  Because it hadn’t been – a few survivors (Sir Jective included) had held the fort against the terrible deluge.

Nonetheless, those that had not ab-sconed for baguette country had managed to profit from the experience.  There were now ample supplies of energy to restock and rebuild.

Cap’s job was a simple one in the colony; he was a judge, of sorts.  If workers had a dispute that required compensation, he’d preside over the dispute while lawyers settled the intricacies of who hadn’t quite pulled their weight in a seed gathering expedition, or of a poor soul who had followed the pheromone trail and ended up in a puddle, and sought recompense from the expeditionary overseer for misrepresentation of trail safety.

He would have sighed, if he could have sighed.  It was a thankless task, but better than scurrying around through the hills of the never never as he had in his youth.

Today’s proceedings were slightly different, however.  Three young interlopers were seeking a repealing; in particular, of an Orange.

“I must insist that the orange be returned to its former state, so that portions can be divided appropriately,” stated the first of the three.

“Impossible.  A protest against your demands.  Entropy cannot be reversed.”

“Aha, but it is a state machine.  A Mark, of chain – see these markings?  From the wanderers of the plains.  Apache Ant.  And you yourself stated that you are pro test.”

Cap thought for a moment.  “Hmm.  So it seems that this must needs be settled.  And I have in mind an idea for resolution.”

“Speak,” said the third, who had been quiet until then.

“You spoke of a test.  I propose a challenge, a trial.  To bring back the herd from the den of toothbrush.”

There was a hushed silence.  “You mean…?”

“Yes, those that were scattered before the event of Safeway’s Premium Custard.  That went to the wilds.  Return them, and we shall discuss how the impossible might once again be possible.”

The court adjourned with relative alacrity, as had not been seen since the match between Chemical Andrea and Turncoat Susie.  Fire would soon fly, but not before the match was lit.

To be continued…

Takeaways from a recent Spark meetup

December 9, 2015

Recently I attended a Spark meetup in my home town.  I learned about a few things.  In no particular order:

Haptics, dark glasses, and a broad brimmed hat

December 7, 2015

There has been a reasonable amount of hype around this sort of area, so I’ll keep this short.  Essentially, my aim in this post is to set out what I’m looking for in a portable working environment as a ‘laptop replacement’.

To cut to the chase, the key thing that I’d be after would be a VR interface hooked up to a pair of modified dark glasses, and haptics to provide me with the sensation of typing on a keyboard, or moving a mouse.

Then one could essentially have an AR/VR type interface that one could use almost anywhere, and which could run off the computer in one’s pocket, or a workstation at home or the office.  So quite practical, really.

2016 should see some movement in this general sort of direction, but my guess is that it’ll take 5 to 10 years for the area to truly come into its own, and maybe 10 to 15 to properly mature.

End to end testing in kubernetes

December 7, 2015

Something that I continue to be interested in is the possibility of end to end testing using kubernetes. The project has come a long way since its launch on github a little over a year ago, and it is interesting to see where it might head next. The purpose of this post is to explore my own particular fascination with it – the possibility of end to end testing complex workflows built of multiple microservices.

Currently kubernetes does have an end to end test suite, but its purpose seems largely to be testing the kubernetes project itself. However, there are the seeds of something with broader utility in the innards of the beast. Some contenders downstream have started building ways to make writing end to end tests for applications easier, such as fabric8-arquillian for openshift applications. More recently, about twenty days ago, a pull request was merged that looks like it might go a step further towards providing a more user-friendly framework for writing such tests. And the documentation for the system is starting to look promising in this regard, although a section marked examples therein is still classified as ‘todo’.

But why this obsession with end to end testing in this way? Why is this so important? Why not, as a tester, simply write browser tests for an application?

The problem is that browser tests do not capture much of what is going on behind the scenes – only the surface layer, the user facing component of an application. To get more immediate feedback about whether a particular part of a workflow is broken, one needs something that can plug into all the backend machinery. And, in the world of the modern stack, that essentially means something that can plug into containers and trigger functionality or monitor things in detail at will.

Kubernetes is of course not the only technology that has promise here. The more fundamental building block of Docker is becoming quite versatile, particularly with the release of Docker 1.9, and the Docker ecosystem (Docker Compose, Swarm, and Machine) may yet have a role to play as a worthy competitor in the quest for the vaunted golden fleece of the end to end testing champion merino.  Indeed, anything that might provide a practical and compelling way to end to end test an application built out of multiple containers is of interest to me, and although I am currently sitting on the kubernetes side of the fence, I could well be convinced to join the docker platform crowd.

My current thinking on a system that provides great end to end testing capability is the following: it should use docker (or rkt) containers as the most atomic components. One should have the ability to hook into these containers using a container level or orchestration service level API, and activate various parts of the microservice therein, built in such a way as to facilitate a proper end to end test. There is a problem here, however: the security of the system needs to be watertight, so altering production code is understandably not particularly desirable.

Instead, it would be great to somehow be able to hook into each docker container in the pathway of an application, and read the logs along the way (this can be done today). It would also be great to be able to do more. One idea might be to write code in the ‘test’ folder of the application, next to the unit tests, that will only be run when a testing flag is set to true. Or, alternatively, maybe one could add api endpoints that are only opened in the staging environment, and are switched off in production (although this may still introduce unacceptable risks). With these additional endpoints, one could read the state of particular variables as data flows through the application in the test, and check that everything is functioning as it should be.
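For concreteness, here is a minimal Go sketch of the staging-only endpoint idea (Go seems fitting, given kubernetes and docker are themselves written in it). The flag name, routes, and state fields are all hypothetical, and a real service would want authentication on top of the flag:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
	"sync"
)

// appState holds a few internal values that an end to end test might
// want to inspect mid-workflow; the fields are purely illustrative.
type appState struct {
	mu          sync.Mutex
	LastOrderID string `json:"lastOrderId"`
	QueueDepth  int    `json:"queueDepth"`
}

var state appState

// handleOrders stands in for an ordinary production endpoint that
// mutates internal state as a side effect.
func handleOrders(w http.ResponseWriter, r *http.Request) {
	state.mu.Lock()
	state.LastOrderID = r.URL.Query().Get("id")
	state.QueueDepth++
	state.mu.Unlock()
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/orders", handleOrders)

	// The inspection route only exists when the staging flag is set;
	// in production it is simply never registered.
	if os.Getenv("ENABLE_TEST_ENDPOINTS") == "true" {
		mux.HandleFunc("/debug/state", func(w http.ResponseWriter, r *http.Request) {
			state.mu.Lock()
			defer state.mu.Unlock()
			json.NewEncoder(w).Encode(&state)
		})
	}

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```

An end to end test could then drive /orders through the front door and poll /debug/state to confirm that the internal machinery moved as expected.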

The alternative to directly debugging code in this way is, of course, to simply log the value of a variable at a particular point, and then read the logs. Maybe this is, indeed, all that is required – together, of course, with writing tests for all of the standard endpoints of each container, and checking that they are working as they should be. There might be an elegant way of writing helper libraries that log something to a docker log file for testing purposes only, and then expose this information cleanly via some form of endpoint.
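As a sketch of what the log-reading approach might look like in practice, here is a hedged Go test that shells out to kubectl to fetch a pod’s logs and greps for a marker line; the pod name, namespace, and marker string are invented for illustration:

```go
package e2e

import (
	"os/exec"
	"strings"
	"testing"
)

// TestOrderFlowLeavesTrace drives a workflow through the public API
// (elided) and then checks the pod's logs for a marker line emitted
// by the service along the way.
func TestOrderFlowLeavesTrace(t *testing.T) {
	// ... trigger the workflow via the application's public API here ...

	// Fetch the logs of the (hypothetical) pod in the staging namespace.
	out, err := exec.Command("kubectl", "logs", "orders-pod", "--namespace=staging").CombinedOutput()
	if err != nil {
		t.Fatalf("fetching pod logs: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "TEST-MARKER: order accepted") {
		t.Errorf("expected marker line in pod logs, got:\n%s", out)
	}
}
```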

Anyway, the area of end to end testing multi container applications is quite an interesting one and I am keen to see how it develops and eventually matures. Together with monitoring tools like Prometheus, there is a large amount of potential for good work to be done in this direction and I am excited about where things might eventually lead with this.

Thoughts regarding webvr

August 24, 2015

I recently came across the work that mozilla have been doing on virtual reality for the web, or webvr.  This is interesting because moving beyond dynamic webpages towards a fully immersive experience of cyberspace offers a tremendous opportunity for improving user experience, as well as for extending the capabilities of the web to provide information services, not to mention improving user productivity.

In particular, the changes that will likely start to be seen over the next few years or so – led by organisations like facebook and maybe game developers – are quite reminiscent of William Gibson’s description of the web in Neuromancer.  It would truly be interesting if retail of the future were conducted not via the current browse and click of amazon or ebay’s websites, but in immersive VR environments staffed either by real people or, more likely, by programs running off deep learning algorithms or other more sophisticated techniques, with more powerful back-end tooling / databases / processing architecture running the minds of VR shop assistants.  Not to mention the ways that people might use the technology not only for retail, but for community, for whimsical purposes, or as a means of artistic expression.

Alternatively, it is possible that this sort of change might complement the adoption of ways of viewing data presented by the hololens project, or other augmented reality initiatives.

The key problem with the work that has been done to date, however, is that the ways of interacting with the worlds therein divorce one dangerously from the real world around one.  At least in front of a computer one can hold a conversation, or turn and look somebody in the eye.  Perhaps it might be useful, therefore, to envisage a future where instead of wearing a VR headset one has a pair of glasses with a lightweight VR computer, bionic eyes, or even a direct or indirect neural interface that would recede based on preferences set by the individual, or via the judgement of a proto-intelligent, ‘cortana’ like robot secretary.

As a minor admonition, I think it is important not to be suffocated by developments like this, and probably not to be overly hasty about embracing nascent technologies.  But assuming that this is going to be the future, I think it is important to understand it, and the possibilities it presents, while thinking about how to maintain a healthy connection to one’s humanity.  Coupling the coming changes with augmented reality and with older technologies, and trying to shrink and detach things so that they do not get in the way of real, authentic conversation and community – these are things I think are important to bear in mind, and that I will certainly bear in mind were I to consider adopting anything like this in the future, or to experiment with developing toy examples with any of the associated tools or SDKs.

A few developments that I think are interesting and worth following

July 2, 2015

No particular updates on my game project at the moment, unfortunately.  I have been thinking a little about a couple of other things of a slightly more abstract nature – namely model selection, and a generalisation of the concept of the radical of a number – but these, too, are works in progress.  Of course, Unity 5.1 is now out with the first phase of Unity’s new multiplayer networking API, but I think I’d like to take the Bolt and Zeus client-server architecture for a spin first, and see where that leads (though interestingly enough, it now appears that Bolt has been acquired by Photon, one of the bigger players in multiplayer networking, who have, amongst other things, targeted Unity3D games as part of their strategy).

So instead of talking about what I’m up to currently (in terms of my hobby projects) I thought I might instead share a few findings with you, the reader, that I think are useful and important.

1. Desalination using nanoporous single-layer graphene

One of these things is a recent article (as of March 2015) published in Nature Nanotechnology, here, on desalination (news digest here).  A number of things about it are significant:

  • The use of nothing except graphene membranes, ie, carbon – so potentially a fairly cheap thing to produce
  • The potential for an almost 100% salt rejection rate
  • The high potential throughput (up to 10^6 g m^-2 s^-1), albeit induced via an applied pressure differential

So quite exciting.  It looks like, with an appropriate implementation, this sort of technology could lead to relatively cheap, clean, and fresh water (certainly cheaper than current techniques, which are based either on brute force methods (such as heating the water and collecting the vapour) or on electrodialysis reversal (which has been used in commercial systems since the 1960s)).  And water is fairly important to have to hand.

2. Single stage to orbit spaceplane ‘Skylon’ on track, target date 2019-2022

Reaction Engines Limited seem to be making good progress on their SSTO spaceplane concept.  This relies heavily on the Sabre engine, which, through 3D printed nanotube components, has the following capabilities:

  • Can cool air from 1000 degrees Celsius to -150 degrees Celsius in 0.01 seconds.
  • Using proprietary methods and techniques, the critical components of the engine are impervious to frosting over at subzero temperatures.

In the last few years, the company has had the following nods of approval from various governmental bodies and organisations:

  • A 60 million pound funding injection from the British government in 2013
  • Feasibility study conducted by the ESA, leading to their seal of approval in 2014
  • Feasibility re-affirmed by the United States Air Force Research Laboratory in 2015

Furthermore, it now appears that the project is ramping up, with assistance and expertise being provided to REL from the United States, and the hiring to the program of various people with decades of experience working in the industry.  So, very promising.  The target date for potential test flights of the full blown SSTO craft could be as early as 2019, with supply runs to the International Space Station by 2022.

The great thing about this project is mainly the cost per kg of delivering a payload to orbit.  Building a spacecraft using these techniques could lead to stupendous gains in efficiency, decreasing costs from the current £15000/kg down to £650/kg – a roughly twenty-three-fold reduction.  In other words, this development could lead to the solar system being opened up for potential commercial use (albeit, hopefully, regulated by an appropriate global agency), and would certainly make it possible to construct interplanetary spacecraft (for manned missions, say, to Mars), or logistical waypoints in orbit for the support of asteroid mining operations via shuttle runs, at significantly reduced cost.  Naturally, this in turn (initial support of targeted asteroid mining operations, say within the window 2030-2040) would address another problem, the increasing scarcity of rare earth metals (although recycling could be a partial solution there).

3. Agriculture in arid areas

Something that I continue to follow is the development of the still nascent area of farming in arid regions, using little more than seawater and sunlight to produce food for consumption.  The reason for this interest is that the world has no shortage of seawater, or of arid areas relatively close to the sea, so there is considerable opportunity for innovation and growth here.

There are a few projects in particular that I am interested in here:

Both Seawater Greenhouse and Sundrop Farms use a similar form of system – they pump saltwater, either from the water table (if close to the sea) or from the sea itself, into greenhouses.  Evaporation driven by solar energy then causes the water to cool the greenhouse and irrigate the plants inside.  This is a gross oversimplification, of course, and there have been decades of work done to polish this general outline, but that is the idea.  There are certain other risks that one needs to deal with in such an operation as well, such as maintenance costs, not to mention how one might deal with a 5 metre sea level rise, or occasional storms.  Regardless, it appears that the technology has now become mature enough to start paying dividends – in a recent press release Sundrop Farms announced their partnership with Coles supermarkets.

The ISEAS project is slightly different.  It uses saltwater ponds to grow shrimp and fish for food, mangroves to purify the saltwater, and plants grown with the purified water to provide nutrients for the ponds (closing the loop) and also provide oil (in their seeds) for biofuel production.

So it looks like there is a fair bit of promise in this general direction.

Some thoughts on developments in enabling technologies over the next fifty years

May 7, 2015

Hi folks,

I thought I might share a few thoughts with you today on something that I’ve been thinking about for a while.  It is a fairly standard problem in computational mathematics, and arises naturally when dealing with solutions to partial differential equations with geometric invariants involving tensors of relatively high order (4+, eg, elasticity tensors in materials science): namely, solving these equations over a 2 to 4 dimensional domain using finite element methods, ie, numerically.

It is simple enough to solve Laplace’s equation or the heat equation over a square domain, but the problem arises when one increases the number of dimensions – or when one introduces a non-trivial (read: non-flat) metric tensor, such as when exploring numerical solutions / computational simulations of the equations of general relativity.  The former problem is easily solved on a desktop computer; for the latter, one needs a relatively powerful supercomputer.
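To illustrate just how tractable the square-domain case is, here is a minimal Jacobi-style relaxation for Laplace’s equation, sketched in Go; the grid size, boundary values, and fixed sweep count are arbitrary choices (real code would iterate until a residual tolerance is met):

```go
package main

import "fmt"

func main() {
	const n = 100 // finite elements along each side of the square
	var u, next [n][n]float64

	// Dirichlet boundary condition: hold the top edge at 1.0, the
	// other edges at 0.
	for j := 0; j < n; j++ {
		u[0][j] = 1.0
	}

	for sweep := 0; sweep < 50; sweep++ {
		for i := 1; i < n-1; i++ {
			for j := 1; j < n-1; j++ {
				// Each interior value relaxes towards the mean
				// of its four neighbours.
				next[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
			}
		}
		// Copy the interior back, keeping the boundary fixed.
		for i := 1; i < n-1; i++ {
			for j := 1; j < n-1; j++ {
				u[i][j] = next[i][j]
			}
		}
	}
	fmt.Printf("u at the centre after 50 sweeps: %.6f\n", u[n/2][n/2])
}
```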

Why?  The key problem is that, in the classical approach to solving a system of equations numerically, the time to converge to a solution (if a solution exists – but that’s another matter entirely, involving the growth and control of singularities) increases exponentially with the number of dimensions.  For instance, for a square domain 100 finite elements wide, one needs to perform operations on 100 x 100 units maybe 50 times until convergence.  For a cubic domain, it is 100 x 100 x 100 units, another 50 times.  For a four dimensional domain, it is 100^4 units, another 50 times.  So the issue is clear – it is devilishly hard to brute force problems of this nature.
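Tabulating the raw element-update counts under the assumptions above (100 elements per side, 50 sweeps to converge) makes the growth explicit:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const elementsPerSide = 100.0 // finite elements along each axis
	const sweeps = 50.0           // iterations until convergence

	for d := 2; d <= 4; d++ {
		// Total element updates: sweeps * (elements per side)^dimension.
		ops := sweeps * math.Pow(elementsPerSide, float64(d))
		fmt.Printf("dimension %d: ~%.0e element updates\n", d, ops)
	}
}
```

This prints roughly 5e+05 updates for the square, 5e+07 for the cube, and 5e+09 for the four dimensional domain – four orders of magnitude between two and four dimensions, before the per-element work has even been counted.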

For this reason, contemporary approaches tend to use sparse matrix methods (for ‘almost flat’ tensors) or impose a high degree of symmetry on the problem domain under exploration.  But this doesn’t really solve the underlying problem.  Surely one must be able to find a better way?

Well, maybe with quantum computers one might be able to.  A recent paper (2013), as well as this slightly older document (from 2010), suggests that one can solve linear systems of equations exponentially faster using a computer of such a nature.  So, if true, this is encouraging, as it would render computational problems like this slightly more tractable.  The developments at Microsoft Station Q on topological quantum computers, and the progress on more conventional quantum computers by researchers at other labs, such as IBM, who recently made a breakthrough in error correction (extending possible systems of qubits to the 13 to 17 qubit range), suggest that the era of quantum computing is not too far away – we may be less than 20 years away from desktop devices.

In a moderately more speculative vein, I am intrigued by the more futuristic possibility of going beyond quantum computing, into the domain of what one might call hyper computing.  This would deal with systems of ‘hyper qubits’ that are ‘hyperentangled’.  That’s probably a bit vague, but I sort of had in mind atomic components of the system with multiple quantum numbers, each of which was entangled with those of the other atoms of the system, as well as entangled internally.  So essentially one would have two ‘strata’ / levels of entanglement.  The key idea is that it might be possible to scale computing power not linearly as with bits, or exponentially as with qubits, but as 2 tetrated by the number of ‘hyper-qubits’ hyper-entangled.  That would be a stupendous achievement, and, yes, futuristic.  At the very least, it would make problems such as the one I have described above much, much easier to solve, if not outright trivial.

For a slightly better idea of how this might work, the general idea might be to entangle a system (such as a pair of photons) in N degrees of freedom, as opposed to merely one for a standard qubit, and then consider entangling this system with M copies of the same.  So there would be two strata to the machine, ie, it would be an (N, M) hyper-entangled system.  Then, if one could scale N and M sufficiently, potentially by increasing the complexity of the system’s constituent ‘atomic systems’, I suspect one would essentially have a system whose power grew as some number tetrated by some other number, the latter growing as a monotonic increasing function of N and M.
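For precision about what ‘tetrated’ means here: tetration is iterated exponentiation, so in the usual left-superscript notation,

```latex
{}^{n}2 \;=\; \underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}_{n\ \text{copies of }2},
\qquad {}^{1}2 = 2,\quad {}^{2}2 = 4,\quad {}^{3}2 = 16,\quad {}^{4}2 = 65536.
```

So even a handful of hyper-qubits would, if this scaling held, dwarf the exponential growth of ordinary qubit systems – which is, of course, exactly why the idea is so speculative.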

Unity Networking looks like it is getting closer!

April 15, 2015

Exciting!  It looks like the first iteration of Unity Networking might be implemented in Unity 5.1.  I’m quite enthused by this, as it was the key development I was waiting on before investing heavily in building the next version of my multiplayer networking project.  Well – not necessarily the key development any more.  Recently I acquired a license for the Bolt engine, which looks like a reasonable alternative contender for this sort of functionality in Unity.  So I may actually proceed with that for the time being, and potentially also for the foreseeable future.

In other news, I was working quite hard a month or two back to deploy a custom build of browserquest on google app engine.  I’ll spare you, the reader, the details, but basically I found the main obstacle to be documented in this stack overflow issue, which led the author of same to raise this ticket.  So it doesn’t look like this sort of thing is possible at the moment – at least on google app engine.  On amazon elastic compute cloud, though, it should certainly be quite doable.  Maybe one might be able to write a lambda function for the nodejs server requests.  Now that would be an interesting learning experience.

Dungeons and Dragons – video and alpha brainstorming

February 28, 2015

Hi folks,

I just thought that I would provide a brief update as to my current thinking on my dungeons and dragons project.  As mentioned in a previous post, a key barrier to progress here was having access to a good in game level editor.  At first I thought opened was the best tool available to work with, but found it quite limited when I played around with it.  However, I recently discovered that it is possible to obtain runtime level editors from the unity asset store, relatively affordably.  So I followed through and purchased one, downloaded the data, and ran it locally in Unity 4.6.2.  You can see the results of my investigations below:

As you can see, it is possible to raise and lower terrain, and add objects to the scene, such as buildings, trees, and jeeps, etc.  So basically everything that I was after, and more.  Furthermore, the package provides complete access to the underlying source, so it is eminently feasible to plumb additional functionality into the program, which I think is quite exciting.

Hence, it now becomes possible to start working towards a first version of my dungeons and dragons dungeon master style multiplayer game.  In particular, I think there are a number of things that I’d now like to do:

  • Plumb in the multiplayer functionality from my previous project.
  • Introduce a simple and straightforward database model for players and dungeon masters.
  • Allow players to spawn in a world in an appropriate way (without falling forever).
  • Allow the dungeon master to move players around.
  • Allow the dungeon master to switch their camera to a creature under their control and move it around.

There are other things I’d like to do, but will probably defer for a later release:

  • Allow the dungeon master to switch their view to a player, default: passive (optional: override player controls, and notify the player that their controls are overridden).
  • Saving and loading levels.  The runtime level editor that I acquired does have save and load functionality (it encodes levels as text files), but it doesn’t currently work quite the way I’d like it to.  Ideally, levels should be written as blobs to a database, with a secondary key being the user who generated the level, so that each user only has access to their own level list.
  • Give the dungeon master access to master database controls, eg, a switch to reset player positions (if some of them have started to fall forever, for instance).  I’d probably like to give players a reset switch, too (but limited to themselves only, of course).

And then, in a release after that:

  • Enable persistence of player position in each of the worlds in which they have played.  So for instance if player Rodriguez has played Bob’s level ‘OrcTown’ and Jill’s level ‘CottageIndustry’, if either respective DM loads said levels and then Rodriguez logs back in, Rodriguez should appear with the position and rotation coordinates he last held while playing that level.

Plumbing in the multiplayer functionality should be relatively straightforward.  I will need to create prefabs for the players, of course, or at least migrate the ones in my earlier project across.  I will need to create any necessary prefabs for creatures that the dungeon master is to introduce into the world.  I will need to reintroduce a lobby, the ability to log in, have passwords authenticated, and create a game (to become the DM of that game) or join a game (that a player has created).  A messaging server will need to be created (using smartfox, for instance, though that may change with Unity 5), and some sensible database structure built.

On creating a game, a player should have their ‘DM’ flag in the player table set to ‘true’.  If a player joins a game, their ‘DM’ flag should be set to false.

A game should not be joinable (in the lobby) if a terrain has not been deployed.  In this instance the CanJoinGame flag in the player table for a DM who is preparing a level should be set to false, and the game should not appear in the list of available games in the lobby.  If a game is full (eg, 4/4 players), it should not appear in the list of games in the lobby either, but that is something that can be deferred until later.  One might also want to distinguish between ‘public’ and ‘private’ games: if a game is public, anyone can join; if a game is private, only users who have been added to the dungeon master’s campaign list in PlayerCampaign should have visibility of the game.  Ultimately, too, one would like to be able to filter games in progress, so that one could find exactly the game one wished to play.
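To make the data model above concrete, here is a rough sketch of the tables as Go structs (Go purely for concreteness, given the game itself would live in Unity); apart from the DM and CanJoinGame flags and the PlayerCampaign table already described, the field names are my own invention:

```go
package lobby

// Player mirrors a row of the player table; DM and CanJoinGame are
// the flags described above, the rest is illustrative.
type Player struct {
	ID          int64
	Name        string
	DM          bool // set true on creating a game, false on joining one
	CanJoinGame bool // false while this DM is still preparing a level
}

// Game mirrors a row of the games table shown in the lobby.
type Game struct {
	ID         int64
	DMPlayerID int64 // the player whose DM flag is true for this game
	Public     bool  // private games are visible only via PlayerCampaign
	MaxPlayers int   // eg 4; a full game is hidden from the lobby list
	TerrainSet bool  // a game is not joinable until terrain is deployed
}

// PlayerCampaign links a player to a dungeon master's campaign,
// granting visibility of that DM's private games.
type PlayerCampaign struct {
	PlayerID int64
	GameID   int64
}
```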

Once a player has joined a game, they should spawn just above the terrain, say in the centre of the map.  Since terrain can be raised and lowered, this will be an interesting (but important) problem to solve.  Alternatively, perhaps the players could appear as tokens that the dungeon master (or they themselves) could place at will and leisure.  This might be a better way to go: players would have a trimmed down version of the dungeon master’s interface, but they would still be able to drag and drop themselves, once, into the game.  Then they could jump between the global map view and first person perspective from any of the creatures they control (the default being just one, their character).

This leads to the need for a fairly important feature in the first version of the game: the ability to toggle between the global controller / camera and avataring into a currently controlled token, using its controller / camera.  This may be difficult to do, but I think the payoff will be worth it.

Moving tokens and objects around should be relatively straightforward, as that is already built into the base template.

Kubernetes, Docker, and Mesos

November 30, 2014

Recently I noticed a couple of interesting new developments emerging in the area of cloud deployment and distributed computing with virtual machines. A while ago (in about 2011-2012), I experimented with dotcloud when it was in its early startup stages, and found its tools a joy to use for setting up (and running) sites on dotcloud’s dedicated servers. Later on, when dotcloud moved into their production phase, I let that particular venture slide, but found their release of Docker to be a further amazing development.

Docker in and of itself is very powerful, but does not afford the full flexibility and utility of dotcloud’s engine and tooling, which allowed one to assign groups of docker machines, each with some form of ‘scaling’ factor, to the particular microservices driving a web application. So, for instance, one might have a MongoDB database as a backend, and a java service implementing a Spring application as the frontend. Hence I was pleasantly surprised to discover, through reading the 5 year milestone blog entry for the release of the go programming language, that google has released a project called Kubernetes (from the Greek, meaning ‘helmsman of a ship’), also on github, that does exactly that – one has the idea of a ‘pod’ of docker containers, with isolated volumes, replication controllers, and microservices. All very exciting!

Both Docker and Kubernetes, it turns out, are built on Go, a programming language that has been designed and optimised specifically for cloud programming. Having discovered these new animals on github, which seem to me likely to become systemically pivotal to the next steps in the web’s continued growth and evolution, I did some more reading on projects implemented in Go and discovered Apache Mesos, which is essentially a way of abstracting computational resources away from individual virtual machines, and a step towards an operating system for distributed computing. Mesosphere on Github is a project consortium whose mission seems to be in line with said vision; they have a project “Kubernetes-Mesos” that I think would be well worth watching, as well as “Marathon”, which aims to be an “init” analogue for multiple VMs on a Mesos cluster and which, I think, may also become quite pivotal in the months and years ahead.

For a more mainstream take on these matters, there is this article at Wired that might be of interest for further reading on google’s kubernetes initiative. Here is a quote from the article in question:

“It’s part of an arms race. There are literally dozens of tools coming out,” says Solomon Hykes, the chief technology officer at Docker and the driving force behind the company’s software containers. “But Google joining that battle–with code that comes from their massive experience–helps show where this kind of thing will go.”

All quite fascinating.

But what does this mean for everyday folk or interested hobbyists like me or you?  Well, certainly, I would not mind deploying a (very small) Kubernetes cluster using rented cloud resources (eg, on GCE, Amazon EC2, Rackspace, Azure, etc) as part of a hobby project at some point.  The ability to scale is also enormously compelling, particularly for a small player who is interested in managing their OPEX to accommodate growth in their customer base.  More fully blown implementations, such as those incorporating Apache Mesos, are perhaps more suited to bigger players, say corporate programming / systems engineering shops.  Of course, the number of corporations that would actually make use of the sort of tooling described in this article in the next one or two years could probably be counted on the fingers of one hand (after all, how many Twitters and Googles does the world have today?  Not very many, I would wager).

However, I am reminded of the apocryphal quotation (circa 1943) that there would maybe be a market for perhaps 5 computers in the world.  Maybe it will be so with technological innovations such as this as well.  Perhaps by 2030 we will see relatively widespread adoption of this sort of thing, with large companies, SMEs, and even individuals routinely using 1000s of VMs, just as a user of a computer today would not think twice about a single machine using billions of transistors.