Cryptography in the Quantum Computing Age

June 16, 2017

There is a problem with much of modern internet communication at the moment.  The problem is that its security relies on factoring large integers being hard.  For a conventional computer, this is indeed the case.  For a quantum computer, however, such as may become available to corporations and governments within the next 5 to 10 years, factoring will be easy.

So the algorithms maintaining internet security are due for a ‘refresh’.  How can we do this?

How RSA works

Well, recall the following about how RSA works (from the Wikipedia entry):

  • Take two very large primes (hundreds of digits long) and multiply them together: n = pq, with p and q prime.
  • Compute h = lcm(p – 1, q – 1)  (this is easy via the Euclidean algorithm, as lcm(a, b) = ab/gcd(a, b)).
  • Choose e such that e < h and gcd(e, h) = 1 (again, easy via the Euclidean algorithm).
  • Determine d such that ed ≡ 1 (mod h)  [ so that d is the modular multiplicative inverse of e ]
  • (n, e) is then the public key.
  • d is the private key.

Encryption process:

  • Bob obtains Alice’s public key (n, e).  With a message M, he converts it into an integer m.  Then he computes c = m^e (mod n)

Decryption process:

  • Alice obtains Bob’s ciphertext c, and decrypts it with her private key as c^d = (m^e)^d ≡ m (mod n)
  • Then recovering M from m is easy if the padding scheme is known.
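The steps above can be sketched in Python.  The primes here are toy-sized for illustration; real RSA uses primes hundreds of digits long, plus a proper padding scheme:

```python
# Toy RSA following the steps above (illustrative sizes only).
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 61, 53              # two primes (toy-sized; real ones are huge)
n = p * q                  # modulus, released publicly
h = lcm(p - 1, q - 1)      # h = lcm(p - 1, q - 1)
e = 17                     # public exponent: e < h and gcd(e, h) == 1
assert gcd(e, h) == 1
d = pow(e, -1, h)          # private key: modular inverse of e mod h

m = 65                     # message M already encoded as an integer m < n
c = pow(m, e, n)           # encryption: c = m^e mod n
assert pow(c, d, n) == m   # decryption: c^d mod n recovers m
```

Note that `pow(e, -1, h)` (Python 3.8+) computes the modular inverse via the extended Euclidean algorithm, exactly the "easy" step described above.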

The Achilles heel exposed by a quantum computer

The main point about the above that makes RSA hard to break is that factorisation is hard.  But if factorisation is easy, Jill can intercept Bob’s ciphertext c and, given her knowledge of n and e (which are public), factor n into p and q, compute h, and then identify d as the multiplicative inverse of e mod h.

Consequent to this, she can then decrypt Bob’s message to Alice as above.
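Jill's attack can be sketched as follows.  The brute-force `factor` function below is a stand-in for the efficient factoring a quantum computer running Shor's algorithm would provide; the key sizes are toy-sized for illustration:

```python
# Sketch of the attack, assuming factorisation is easy.
from math import gcd

def factor(n):
    # stand-in for an efficient (e.g. quantum) factoring oracle
    for p in range(2, n):
        if n % p == 0:
            return p, n // p

n, e = 3233, 17   # public values Jill already knows (3233 = 61 * 53)
c = 2790          # intercepted ciphertext (the integer 65, encrypted under e, n)

p, q = factor(n)                            # the step quantum computing makes easy
h = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # h = lcm(p - 1, q - 1)
d = pow(e, -1, h)                           # recovered private key
m = pow(c, d, n)                            # decrypted message
assert m == 65
```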

So, if factorisation is easy, RSA is in a bit of a pickle.  But is there a better way to encrypt?

RSA over an integer polynomial ring


  • Take two very large irreducible polynomials over the integer polynomial ring Z[x] (hundreds of terms long) and multiply them together: n[x] = p[x]q[x], with p[x] and q[x] irreducible.   nb. for extra security, you may want to ensure that the coefficients of these polynomials are each at least 100 digits long.
  • Compute h[x] = lcm(p[x] – 1, q[x] – 1)  (this is easy via the Euclidean algorithm for polynomials, as lcm(a, b) = ab/gcd(a, b)).
  • Choose e[x] such that deg e[x] < deg h[x] and gcd(e[x], h[x]) = 1 (again, easy via the Euclidean algorithm).
  • Determine d[x] such that e[x]d[x] ≡ 1 (mod h[x])  [ so that d[x] is the modular multiplicative inverse of e[x] ]
    • Note: for an attacker who knows only n[x] and e[x], this is hard unless factorisation is easy.
  • (n[x], e[x]) is then the public key.
  • d[x] is the private key.

Encryption process:

  • Bob obtains Alice’s public key e.  With a message M, he converts it into an integer m.  Then he computes c = m^e (mod n)

Decryption process:

  • Alice obtains Bob’s ciphertext c, and decrypts it as c^d = (m^e)^d ≡ m (mod n)
  • Then recovering M from m is easy if the padding scheme is known.
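The first key-generation step above, multiplying two polynomials over Z[x], can be sketched as follows.  The factors here are tiny and simply assumed irreducible; the scheme as described would use polynomials with hundreds of terms and very large coefficients, and the remaining steps (reducing mod h[x], choosing e[x]) are left abstract:

```python
# Multiplying two polynomials over Z[x], represented as coefficient
# lists indexed by degree (lowest power first).

def poly_mul(p, q):
    # convolution of coefficient lists: (p * q)[k] = sum over i of p[i] * q[k - i]
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [1, 0, 1]           # x^2 + 1, irreducible over Z[x]
q = [2, 1]              # x + 2,  irreducible over Z[x]
n = poly_mul(p, q)      # the public 'semiprime' polynomial n[x] = p[x]q[x]
assert n == [2, 1, 2, 1]  # i.e. x^3 + 2x^2 + x + 2
```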

Is this vulnerable to attack by a Quantum Computer eavesdropping on a channel carrying an encrypted message?

I claim not, as the set of irreducible (‘prime’) polynomials over the integers is much richer than the set of prime integers.  The cardinality might be the same, but the complexity intuitively seems much greater to me.  Factorising a ‘semiprime’ polynomial, whose factors are irreducible polynomials with possibly 100 or 200 terms, each with coefficients up to 150 digits in length, is, I suspect, much more difficult than factorising a semiprime integer.

If one had a tetra-computer (“hyper-computer”), the story might be different, but I suspect we will not have sufficiently powerful variants of those until 2050.

Towards zero latency – improved communications, coming soon to you

June 16, 2017

This recent press release got me thinking about how a quantum internet might work.  In particular, one question an observer might ask is, why do we need entanglement to transmit information?

The answer is that the ‘speed of entanglement’ is fast.  Experimental lower bounds put any such influence at upwards of 10,000 times the speed of light.  It is quick.  But, you say, can’t I communicate quite rapidly anyway from one side of the world to the other?  Yes, but there is latency.

The bane of latency

Latency is a problem in internet communications today.  This problem is felt most acutely in high-bandwidth applications that require near-instantaneous feedback.  A key medium, and one of the first to adopt bandwidth-hungry new technologies like VR, is gaming.

Gaming where the data centre is located in another country can be a very frustrating experience.  That is where entangled communications excel.  Additionally, entangled communications have the promise of increasing bandwidth drastically, from megabits per second, to gigabits, or even terabits per second.

Entanglement is useful

By entangling a medium of communication, such as photons, and then separating the entangled partners, it is possible in principle to transmit data very quickly.  However, there are a couple of obvious problems:

  • How does one ‘recharge’ one’s entanglement capacity at each network node to service the required / anticipated bandwidth?, and
  • How does one ensure one has enough bandwidth for these communications?

These are interesting questions.  Certainly, it seems intuitively reasonable that with a first-pass quantum internet, the rate of recharge of entanglement would at best be limited by:

  • The speed of light for transportation of entangled pairs from a source; and
  • The amount of energy consumed in creating entangled pairs

Resolution of the problems

However, it does not seem unreasonable that the amount of energy required to create an entangled pair could be relatively low and, economically, fairly cheap.  That leads to the matter of transporting this ‘data energy’ to where it needs to be consumed.  This is a problem that necessitates distributing data energy not to a single point, but to a 2-tuple of points.  It should be possible, e.g. with photons: one could use optical waveguides to transmit them to storage at particular centres, provisioning 2 to 3 times the anticipated capacity in time bucket N between nodes K1 and K2.  Redundancy would be a function of the amount of funding the market between K1 and K2 is willing to pay, and reliability (that is, the duration of the entanglement) a function of the same.

Ensuring duration of reservoir of entangled pairs between nodes K1 and K2

Assuming that one has distributed entangled pairs to nodes K1 and K2, it should be possible to have error correction operating at both ends to ensure the integrity of the entangled state of the ‘data energy’.  This will require an input of first-order energy at both nodes K1 and K2.  As to details of implementation, I am not sure, but the integrity of entangled information is likely something that has already been subject to exhaustive study.

Concluding thoughts

This theorycraft sketch of a quantum internet still requires that ‘data energy’, the backbone of the scheme, be limited in transmission by the speed of light.  But this is not such a bad thing, as long as one can anticipate demand in advance.  For example, in the energy markets, one often has to plan in advance how much energy people in a particular location at a particular time will want to use.  If there is not enough, there will be blackouts.  The equivalent in the quantum internet could be fallback to the regular internet.  But, hey.  At least things would still work.  Sort of.

Experiments with APIs: a CFA fire alerts app

March 13, 2017


Over the last couple of days I thought it might be interesting to investigate writing a prototype CFA fire alerts app.  The key motivation was essentially the pursuit of simplicity.  Emergency Victoria does have an alerts app, but it contains a plethora of calls to various RSS feeds and other APIs, and is not just a way for people to be notified of CFA fire alerts.

The CFA used to have an older app, in fact, but it was decommissioned in favour of the “improved” and more general app.

So I decided to try building my own over the last couple of days, this Labour Day weekend.

Ultimately, I was successful in my goal of building a prototype.  Essentially I built an app in Unity that I was able to compile and deploy to my android phone.  It displays a map centered on hard-coded latitude and longitude coordinates, and places a black pin anywhere on that map corresponding to a flammables related incident that the app considers to be interesting.  Also it only does this once; there is no refresh option, no buttons to push.  Very simple and utilitarian.  But a good proof of concept.


Getting the data

So how did I go about building the app?  Well, first I worked out how to fetch the incidents XML feed by making a C# request using Unity libraries.

Processing the data

Secondly, I deserialised that xml object into a list of incidents.

Next, I extracted all of the information from the description attribute: an additional 13 fields.
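These two steps can be sketched in Python (the app itself did this in C# with Unity’s libraries).  The feed shape and field names here are a simplified, illustrative stand-in; the real CFA feed packs its description field differently:

```python
# Sketch: deserialise an RSS-like feed into a list of incidents, pulling
# extra fields out of the description element.
import xml.etree.ElementTree as ET

SAMPLE = """<rss><channel>
<item>
  <title>Grass fire</title>
  <description>Latitude: -37.81 Longitude: 144.96 Status: Going Size: Large</description>
</item>
</channel></rss>"""

def parse_incidents(xml_text):
    incidents = []
    for item in ET.fromstring(xml_text).iter("item"):
        tokens = item.findtext("description", "").split()
        # pull the "Key: Value" pairs out of the description
        fields = {tokens[i].rstrip(":"): tokens[i + 1]
                  for i in range(0, len(tokens), 2)}
        fields["title"] = item.findtext("title")
        incidents.append(fields)
    return incidents

incidents = parse_incidents(SAMPLE)
assert incidents[0]["Status"] == "Going"
```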

Obtaining the interesting information within the data

Consequent to this, I used my Google and script-kiddie abilities to graft onto my project some code that could convert two (latitude, longitude) coordinate pairs into a relative distance.

After that, I wrote a filter to determine what constituted an “interesting” fire: that being sufficiently close to a particular pair of hard-coded coordinates, and also with the additional qualities that the fire was not small, and not under control.
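The distance calculation and the “interesting” filter can be sketched as follows (in Python rather than the project’s C#; the haversine formula stands in for whatever the grafted-on code used, and the threshold, coordinates, and field names are illustrative, not the app’s actual values):

```python
# Sketch: great-circle distance plus an "interesting fire" filter.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # haversine distance between two (latitude, longitude) pairs
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))   # 6371 km = mean Earth radius

HOME = (-37.81, 144.96)   # hard-coded coordinates of interest (illustrative)

def is_interesting(incident, max_km=50):
    # interesting = sufficiently close, not small, and not under control
    near = distance_km(*HOME, incident["lat"], incident["lon"]) <= max_km
    return near and incident["size"] != "Small" and incident["status"] != "Under Control"

fire = {"lat": -37.90, "lon": 145.10, "size": "Large", "status": "Going"}
assert is_interesting(fire)
```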

Debugging statements and breakpoints helped tremendously throughout this process.

Displaying the information to the user

So that allowed me to determine what was interesting.  But that left one final thing, figuring out how to display that to the user.

Fortunately, there was a plugin on the Unity Asset Store I was able to download for free, for the Google Static Maps API.  With additional script-kiddie knowhow, I was able to adapt the out-of-date code therein.

With further chicanery and trial and error, I was able to then graft my hard-coded lat,long coordinates to center the map about my location of interest, then push through the interesting incidents into a list of markers which would then be placed on the map as black pins.  This relied entirely upon the features of the google static maps API.

Deploying to Android

Finally I investigated deploying and building the project to android.  I found that I needed to do a number of things:

  • Install Android Studio 2.3.1.
  • Install the Android SDK for Android 6.0
  • Install the JDK from the Oracle website for Java SE 8.
  • Alter the Unity settings to point to this new JDK rather than for Java SE 7.
  • Alter the player settings within the Unity build settings so that the 32 bit display buffer was unchecked.
  • Alter the player settings within the Unity build settings so that the project had an appropriate name (com.spinningcatstudios.ChrisCFAApp)
  • Create an Android manifest at ProjectRoot/Assets/Plugins/Android/AndroidManifest.xml with an appropriate format, so that I could select the appropriate Android SDK
  • Finally, in Settings/Security, I needed to allow installation of apps from unknown sources in order to install my app.

After this I was able to build the project, and debug it while connected to my development computer.  However I found that Astro File Manager could not install apk files with Android OS 6, so I needed to switch to a more modern file manager.

Future features

Various additional features that come to mind are:

  • Configurability of what constitutes an interesting event.
  • Ability to alter the zoom and focused location of the map, to see more or less.
  • Add additional data feeds (eg traffic incidents etc) with other additional filters.
  • Move from the google static maps api to the google dynamic maps api 3.0
  • Notifications for when alerts occur

Bug fixes

  • Make the map look less stretched.

Parting thoughts

All in all, this was an interesting experience.  The end result is that I have a working prototype, which I can certainly at least deploy to Android, and which should hopefully be less painful to deploy to iOS.  Anyway, here’s hoping!

Mechanised farming in desert areas

January 25, 2017

I’ve been thinking recently about an engineering problem that I started considering a number of years back.  The challenge is fairly straightforward; in various parts of the world, such as Australia, there are large desert wildernesses that are essentially unlivable, but which contain vast amounts of sunlight, and land.

This suggests that one should theoretically be able to set up a series of solar energy plants to power, say, one or several massive mechanised agricultural operations.  There are certain small pilot examples where this model has already been demonstrated to work.  For instance, Sundrop Farms at Port Augusta in South Australia.  Their operation relies on solar power and proximity to the ocean, and is partly though not completely automated.

Ultimately, however, it could well be a good goal to aim to pipe water from the sea to support intensive agriculture in greenhouses many miles from open water.  This could be achieved through using solar power and batteries in many areas.  Such operations could be monitored, maintained, and harvested by drones operated by artificial intelligence.  The logistics of transporting goods to ports or highways could also be managed by autonomous machines, potentially with refrigerated interiors.  Container vessels could then transport produce as trade to regions that required such.

There are many small problems that need to be solved here though in order for something like this to be practicable at large scale.  In particular it needs to be possible to build and maintain such infrastructure autonomously.  This will require advances not only in renewable energy (which we largely already have in wind, solar, and lithium batteries), but also in robotics and artificial intelligence.  I suspect that the latter two technological categories mentioned will largely be mature enough for this sort of operation by 2030.

I think that it would be worthwhile to try to build small first and then expand, though – with potentially many, many ventures in different arid regions or locations – rather than waiting for things to become completely mature.  Although I dare say that that is where the commercial reality is, regardless.

Notes from a machine learning unconference

December 5, 2016

Here are a few dot points I took from a machine learning unconference I went to recently.

  • The Stan probabilistic programming language.  You can declare variables without giving them values and then use them in calculations; outcomes are determined probabilistically.
  • Andrew Gelman at Columbia apparently knows things about multilevel models (beyond Bayesianism).
  • GANs (generative adversarial networks/models) are powerful.  Tension between minimising one thing and maximising another.  Made me think about Akaike information for the ‘max’ of an algorithm and Fisher information for the ‘min’.  Might be able to render these stable that way.
  • KiCad, an electronics CAD tool for microcircuits
  • Ionic transfer desalination
  • Super-resolution is amazing (more GANs)
  • InfoGAN
  • LIME, local-to-global explanation (a model explaining a model).  Might be the next step for this sort of thing (testing machine learning models)
  • Electronic Frontiers Australia, Citizenfour
  • Simulation to generate data: UnrealCV, Unity3D to train machines
  • Parsey McParseface and Inception, Google pretrained models
  • ImageNet

Udemy courses and a procedural generator SDK

December 1, 2016

I’ve recently taken advantage of the latest ‘Black Friday’ sale on Udemy, and consequently have been taking a few courses on client-server interaction with Unity (using DigitalOcean for the server), as well as database manipulation (using the MAMP stack: macOS, Apache, MySQL, PHP).  I’ve found these courses edifying and useful.

The same fellow who has authored these courses also authored a course on procedural generation, which I am taking currently.

My hope is that I might be able to take ideas from the procedural generation course, combine these with ideas from the multiplatform runtime level editor plugin for Unity, and be able to thereby procedurally generate content from within a running game.  But in terms of the absolute bare minimum for an MVP, I guess I would be after:

  • The ability to toggle between player and DM.
  • A single view for the client, containing just a flat featureless plane.
  • An asset to drag into the scene for the DM (say, a tree).
  • The ability to extract the metadata of the scene containing the one additional asset and send it via PHP to a server on DigitalOcean.
  • The ability for the server, on receipt of a message from a client flagged as a ‘DM’, to transmit to all clients.
  • All non DMs should see their scenes update to see the tree.

This should not be too hard.  I think I have all of the necessary pieces to do this now.  After this, for MVP 2, I’d like to extend things in the following way:

MVP2 (transmission of procedurally generated scene metadata):

  • In the DM view, the ability to procedurally generate a town.
  • It should then be no harder to extract the scene metadata and transmit that to the server.
  • The non-DM views should then see their scenes update to see the town.

MVP3 (modifying assets procedurally):

  • The ability to procedurally generate terrain.  This is modifying an asset (being the terrain).
  • Transmission of the modified asset over the wire.
  • The non-DMs views should eventually update to see the modified asset.

MVP4 (limitation of privilege and additional view):

  • The DM should now have two views: the edit view, and the same view that the non-DMs see.
  • The non-DMs might also be able to see the edit view, but have limited privileges to change things therein.

Consequent to this, I’d like to see if I could create some sort of ‘procedural generator SDK’ for allowing developers to easily generate procedural generators.  The idea is that these would take several sets of assets, split into several tags (some perhaps having multiple tags per asset).  It would also customise some sort of algorithm for how these tags should be manipulated relative to each other.  Finally, I’d want to have within the SDK a way for the developer to expose a simple UI to a user.

For this next stage of the project, an MVP would simply be an SDK that created a procedure for two tags, and a ‘Go’ button.


  • Multiple tags, with possibly multiple tags per asset, and a ‘Go’ button.


  • Now with a more complex and configurable UI for the procedural generator created by the script created using the SDK.


  • A procedural generator marketplace exposed within the game.

Further thoughts regarding Unity project

November 19, 2016

It turns out that there are two key ideas that should prove useful in my investigations: text-based scene files, and the asset database in Unity 5.4+.  Every time one modifies something, one would like to note whether it was a scene change (an asset addition or reposition) or an actual modification of an asset.  Since terrain is an asset, this is an important distinction to take note of.

Consequently, when building the architecture, I am roughly interested in the following: first, a set of Photon Bolt clients talking to a Photon Zeus server running on Amazon EC2; secondly, each client tagged as either a ‘Dungeon Master’ or a ‘Player’.

Then, for a Player, the experience is essentially a multiplayer game with signals synchronised with other clients in the ‘Active World View’.

For a Dungeon Master, they have the Active World View, but also a World Modification View on another tab.  Both of these as a source of truth have the game state, which is stored as metadata (text based scene files) and binary files for relevant assets (uploaded by the database API) sitting on the Zeus server.

So the Zeus server in EC2 has the game state.  Every <interval>, say five seconds, this is broadcast to every client (pushed).  Each client then has a new game state.  This state is then loaded into the Active World View (the game).

For the Dungeon Master, they have the option to modify things in the World Modification View.  Changes here update the scene in their viewpoint (the dm game state) which is then pushed by a polling mechanism to the Amazon EC2 server, for distribution as the new game state to each client (including the Dungeon Master’s Active World View).

Say as a scenario then that the DM adds an asset.  This is then reflected as a change to the text based scene file, which is pushed up to Amazon as a fairly small payload, then pushed down to all the clients again.

Alternatively, suppose that the DM edits an asset (eg the terrain).  This is then reflected in a change to that asset in game, which then needs to be transmitted as binary data to the server (a slightly larger payload), which is then pushed down to all clients again.


Ideally it should be possible to persist a map, as then things would not need to be reworked.  To do this, the DM could have a local database store for their World Modification View state, which could store the scene text file for the state when they click ‘save’, and the binary files for any assets that have been created and/or modified in situ.  These should evidently be tagged in some easy-to-find way (either by player naming or timestamping).

Multiple DMs

If multiple players have edit privileges, evidently things are also a bit more complicated.  The server truly does have to own the game state, and one needs to be able to handle race conditions and queueing of the work being done to alter the game state.

In this model, the server broadcasts the game state to each DM client.  Then the Active World View naturally reflects this new game state.  For the World Modification View, it should theoretically be possible to update this too, but the view should not change while a DM/player is actively altering something (when they click ‘edit’).  When the player clicks ‘commit’, the changes then should be pushed to the server, and any local changes pushed by the server should then be loaded in as a pending queue of alterations.

Scenario: Say that P1 adds a tree and a house.  Then P2 adds a horse and a goat.  Meanwhile P3 is altering the terrain.  P1 commits first, then P2 does.  P1’s data reaches the server after P2’s data, but the server notes that P1 made its request first and queues the requests accordingly.  The server pushes to P1, P2, and P3.  P1 and P2 then see the changes in their World Modification View.  However P3 sees no change.  P3 commits, finally happy with the terrain change.  P3 then sees P1’s change.  A moment later, they see P2’s change.  Of course, meanwhile P3’s change has been pushed to P1 and P2.
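The queueing behaviour in this scenario can be sketched as follows (illustrative Python, not the Photon/Zeus API): the server orders commits by when each request was made, not by arrival order, and then rebroadcasts the resulting state.

```python
# Sketch: server-side commit queue ordered by request time.
class GameStateServer:
    def __init__(self):
        self.pending = []   # (request_time, player, change) tuples
        self.state = []     # applied changes, in authoritative order

    def receive(self, request_time, player, change):
        self.pending.append((request_time, player, change))

    def apply_pending(self):
        # queue by when the request was made, so P1's commit beats P2's
        # even if P1's payload reaches the server second
        for _, player, change in sorted(self.pending):
            self.state.append((player, change))
        self.pending.clear()
        return self.state   # this is what gets pushed to every client

server = GameStateServer()
server.receive(request_time=2, player="P2", change="add horse and goat")
server.receive(request_time=1, player="P1", change="add tree and house")
state = server.apply_pending()
assert [player for player, _ in state] == ["P1", "P2"]
```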

(This model is best viewed in terms of ‘git checkout -b P1/tree_house’, ‘git add tree’, ‘git add house’, ‘git commit -m "I added a tree and a house"’, ‘git push origin P1/tree_house’, ‘git pull’.)  That is, it is analogous to source control; in fact, it is the same sort of philosophy.  In that way, multiple ‘branches’ could be spun off before merging back into ‘master’.  But of course, that leads to:

Merge Conflicts

Say two DMs have alterations that conflict with one another.  Which gets applied?  In this case, I take the philosophy that ‘the last is always right’.  So the last becomes the new master.

Unity Game Project – some thoughts

July 17, 2016

A few months ago I revisited my Unity Game project, and re-examined the feasibility of having the ability for a dungeon master to modify a level in real time while players were running around.  The idea was for it to be quite minimal, in terms of the fact that there would be no combat mechanics, unnecessary animations, or fancy character progression / skill development / special moves etc.  Just a world in which people could wander around, with static NPCs / monsters and objects like houses and trees.

Denis Lapiner’s multiplatform level editor looked promising to me a while ago, so I looked into its mechanics at the aforesaid juncture.  The way it actually works, it turns out, is to create a terrain object, serialise it, and save it as a blob to local storage, which is then reloaded in a second scene.  Hence my immediate hopes of doing everything in one scene were readily dashed.

However I’ve thought a bit more about this problem more recently, and I have renewed hope that something interesting might still be possible.  I think, with some skill and cunning, it might be possible to modify the level editor scene so that two or three times a minute, a snapshot of the terrain is taken, and then the diff is applied to regenerate a preview scene – wherein also the players and everything else is pushed (as appropriate network objects) – also viewable by the dungeon master.  So, in essence, the DM would see two copies of the level (two subscreens on their screen) – one, being the level they are editing, and the second, being a non-interactive snapshot of the scene (an ‘autosaved’ version, if you will), where they can camera pan around, or even avatar into, but not interact with via the level editor.  The second screen would be updated several times a minute (or, if one had a truly powerful device – maybe not now, but at some point in the future – several times a second, so one could have a refresh framerate).  One could also imagine the DM being able to ‘minimise’ the editor portion so that they could focus on DM’ing in the ‘player’ or ‘viewer’ subscreen.

The viewer subscreen need not be totally toothless, however, as prefabs might be instantiable within the level using the standard Unity mechanics.

The players themselves would see a separate scene that just shows the current snapshot of the DM level, like the ‘viewer’ subscreen for the DM, but this would be their entire world.  They might also be able to instantiate prefabs as well, but they would not be able to sculpt or modify terrain like the DM.

In terms of the desired experience for a player, they should, once every 10 to 20 seconds or so, see the level shift around them to reflect the latest changes that the dungeon master has made.  Something that would need to be caught, however, is the idea of where the ground is.  The player’s height coordinate, therefore, would need to be constantly remapped to preserve their height above the terrain at the (x, y) position they are located at.  Players would also need to be able to reposition their avatar via a third-person perspective if they get stuck in a wall or an object that has suddenly appeared.
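The remapping idea can be sketched as follows (plain Python; in Unity the terrain heights would come from something like Terrain.SampleHeight at the player’s position, and the numbers here are illustrative):

```python
# Sketch: keep the player's height *above the terrain* constant across a
# terrain update, rather than their absolute height.
def reground(player_height, old_terrain_h, new_terrain_h):
    # preserve height-above-ground at the player's (x, y) position
    return new_terrain_h + (player_height - old_terrain_h)

# player standing 2 units above ground; DM raises the terrain from 5 to 9,
# so the player is moved up rather than buried inside the new terrain
assert reground(player_height=7.0, old_terrain_h=5.0, new_terrain_h=9.0) == 11.0
```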

The interface that I plan to use to manage the networking will still be Photon Bolt, with Zeus as a server running somewhere on amazon ec2.

So that actually starts to seem like it would be vaguely doable.  And maybe, at some point in the future, with breakthroughs in personal computing, one might be able to ramp up the frame refresh rate, as well as maybe enable the experience for VR.  Possibly game engine improvements might also enable one to increase the scene refresh rate as well.

Some thoughts regarding M-theory

June 14, 2016

One of the things that I was thinking a bit more about recently was the nature of what the theory previously known as string theory is supposed to be about.  There is some murky intuition – that Feynman diagrams should somehow be lifted to dealing with Teichmüller spaces, and the amplituhedron, and suchlike.  But it has occurred to me that this is rather a straw concern for what is really at the heart of the matter, or the structure that people are concerned about here.  For it seems that there is a category error, so to speak, or a particular cognitive fallacy, in that the pre-school analogy for the structure associated to what people are seeking to study is viewed, erroneously, as one and the same as the true underlying structure.

This is not obvious, but it has become slightly clearer to me now over the years, now that I have had time to properly study exotic geometries and their associated geometric invariants, information theory, and higher category theory.  Unfortunately, the malaise associated to this has become a rather peculiar game of information and disinformation – it is, in fact, hard to tell where the pieces sit.  But, as always, it is helpful to be guided by good mathematical and physical intuition.  Truth is truth.

So, what is string theory / M theory – what is it actually supposed to be about?  Well certainly one simple statement that I could make would be that it is an extension of L theory (concerning which there are many excellent results due to Ranicki), which in turn is an extension of the K theory which many deeply talented 20th century mathematicians have contributed to, such as Atiyah, the Bourbaki society, and Grothendieck.

But that is just replacing a term with a term.  So, what is K theory?  It is the cohomology of schemes.  And what is cohomology?  That tells you roughly how to categorise a differential form in a local chart on a manifold.  A scheme, per my admittedly simplistic viewpoint, can be viewed as an object which, within a local chart, may be associated to the twisting of two algebraic varieties (which can be viewed as charts of an Alexandrov space).  So, if {(x, y, z) | f(x,y,z) = 0} is one variety, and the other is {(x, y, z) | g(x, y, z) = 0}, then a scheme might be given by {(x, y, z) | \circ (f ; g)(x, y, z) = 0 }, where \circ (f ; g) is the composition of f by itself g times.  And there is in fact more than one way to construct an exotic twisting of f by g (there are, in fact, three): we can also act on f by g with repeated multiplication, or with repeated exponentiation.

So that is K theory, and that is schemes, or 1-schemes.  So, what about L theory?  Well, L theory concerns itself with 2-schemes, which can be constructed by the twisting of three 1-schemes together in one of five different ways – with operators \circ, \circ_(2) (higher order composition / tetrated composition), \star, \wedge, and \wedge_(2) (higher order exponentiation / tetrated exponentiation, or pentation).

Hence we have 2-schemes, and L theory.  There is a deep, deep well of structure waiting to be explored in that area alone.  But what about M theory?  Well, M theory concerns itself with 3-schemes, which are constructed by the twisting of five 2-schemes together in one of seven different ways – with the operators above, as well as \circ_(3) (pentated composition) and \wedge_(3) (pentated exponentiation, or heptation).

However, this is a slight simplification, because we also have exotic operators that act on 3-tuples of functions (rather than 2-tuples as above), as well as on 5-tuples of functions (these operators are related to pre-cursor geometries for cybernetic and meta-cybernetic structures, respectively).

Evidently the theory here is fantastically, wildly, stupidly deep.  The structure is gloriously fascinating and elegant in its form.  And, with this viewpoint, one is now prepared to read the papers of the leading lights of the field with a new perspective.

Additionally, however, we can ask the question – why jump from K theory straight to M theory? What is the reasoning for that?

I can hypothesise an answer – it is because, perhaps, that M theory is a 3-categorical construction – it makes most sense in the language of 3 categories.  Just as K theory deals with 1-categories, and function spaces as being the atomic objects, and L theory deals with 2-categories, and function-function spaces as being atomic (ie, functionals forming a basis for the space), we have that M theory deals with function-function-function spaces, or meta-functionals as a basis for the space.  Hence, the geometric invariants of M-theory are meta-meta-functionals.

Now, for reasons that I think it might be possible to bludgeon into logic, one might by a peculiar association conflate the four levels of subtlety traversed so far – ordinary set theory, or 0-category theory, through to 3-category theory – with the dimensions of ordinary space.  The study of M theory is then motivated by the idea that one can view the structure or scaffolding of the abstraction itself as manifesting the underlying structure of reality – the meta-information associated to the theory.  Which is really, really weird if you think about it.  But maybe this is the case.

Another reason the jump from K theory to M theory might have been made was out of some strange quixotic quest to discover a ‘theory of everything’.  The problem is, though, that this is only the fourth level of subtlety – there are a fair few more steps to traverse before one gets to aleph null.  Another four more, at the very least!

So that is my current thinking regarding this particular circus.  L theory and M theory, with their associated higher categorical abstractions, are areas of mathematics that are nowhere near as deeply plumbed as K theory, and which have at least as much structure, if not more.  Since K theory concerned a generation of the finest minds on the planet for a number of decades, I can see that these other areas could definitely warrant a certain degree of attention and respect.

It is perhaps unfortunate that such noble structures as these have been given short shrift in the way their applications to physics have been portrayed.  There are probably two reasons that things have become slightly strange here.  One is that the public at large – and, in particular, the educated public – are a touch or two smarter and better informed about physics than most full-time intellectuals might be prepared to credit.  However, they are perhaps not sufficiently interested to reason their way through the stick-figure interpretation of the phenomenology associated to higher categorical descriptions of reality, and actually see that there might be more to it than ‘vibrating modes of a ‘string’’.  To my mind, there are no strings – this is perhaps just an artifact of conflating the arrows in a commutative diagram within higher category theory with actual physical objects, combined with some confusion as to tubes in Feynman diagrams.

The other reason is that the mainstay of academics who concern themselves with this sort of thing are perhaps not as well informed as they ought to be.

Indeed, it is a pity, because the discipline suffers for the lack of an adequate popular science type explanation for many of these things, as well as a faithful portrayal of the associated objects.

A further confusion is whether the structures themselves should dictate how to define invariants for them – or whether the way that invariants or physical principles are developed should itself be considered part of the theory.  The received wisdom on this seems to lack a certain narrative in its positioning, which can lead to confusion.  For instance, there is a fair bit of promising work in other areas, such as the information sciences and machine learning, that could inform the discussion of how to derive geometric invariants for a given structure from first principles.

A very foolish brain dump

April 2, 2016

Well, it is no longer really April Fools’ Day in many parts of the world, but I guess it still is in the States, so I can probably get away with the above.  In short, I thought I would write about a number of things that I find interesting or that I’m thinking about, and maybe then sketch a short story idea or two.

Unity package / feature request

Something that I’m still interested in is a level editor for Unity3D that allows people to edit a level in real time while other players are running around within it.  Sort of like roll20, but in 3D.  There are a few technologies that I think are reaching towards this – the multiplatform level editor plugin here, and the scene fusion multi user unity editing plugin here.

The latter technology looks very useful for building games (as collaborative development and improved team productivity is really the market that it is angled towards, and it looks tremendously useful for that), and probably would double as sufficient for the purposes of moving people around a world and pretending that one was in a Roll20 style role playing game sandbox.  So I could certainly imagine that if there was a ‘master editor’ or admin user who could lock down certain parts of the unity editor then that would probably be enough for my use case.

The former technology does not extend the Unity editor but rather runs in-game.  It allows the user to edit a scene at runtime and save the level to a serialised format that can then be loaded and run.  There have been many improvements to it over the last couple of years; however, it does not really support what I’m after, which is the ability for somebody to add things to a scene while the ‘play’ mode of the game is active, and then export the current state of the level to a serialised format (after the manner of a ‘saved game state’).
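In language-agnostic terms, exporting the live state of a level amounts to walking the scene objects and dumping each one’s spawn information to a serialised format.  A minimal sketch of the idea in Python (the real thing would be C# against Unity’s API; the field names here are hypothetical):

```python
import json

def save_level(objects, path):
    """Serialise the current state of each scene object (a 'saved game state')."""
    state = [{"prefab": o["prefab"], "position": o["position"]} for o in objects]
    with open(path, "w") as f:
        json.dump(state, f)

def load_level(path):
    """Reload the serialised level, ready to be re-instantiated in the scene."""
    with open(path) as f:
        return json.load(f)
```

On reload, each entry would be handed back to the engine to instantiate the named prefab at the stored position – which is essentially all a Roll20-style sandbox needs.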

Largely speaking, however, the code for the former is available and quite amenable to forking, so it is possible that it could be modified to let me plug in something like the Bolt/Zeus networking stack (download links here) to support a multiplayer scenario.  However, I am lazy (and also time poor), so I find it much more efficient merely to write about what I’m after.  Scene Fusion would probably support what I want, but it is overkill, and I also would not have visibility of the code in the way that one does for Bolt and the multiplatform level editor.  I probably also wouldn’t be able to tinker with the networking servers in the same way that I could run Zeus on a RightScale-managed Amazon EC2 instance.

Ultimately I’m not after purchasing a game, or a piece of software that is a black box to me, but rather I’m keen to use something that I can tinker with, recompile the individual components, and learn a bit from.  Hopefully that makes sense.


I’ve made a bit more progress with a paper that I’ve been working on.  In particular I managed to prove existence and uniqueness of a lift of a particular function to the reals, and also came up with an interesting identity for a generalisation of another function, which may or may not be true.  Over the next month or two I hope to make a bit more progress.

Additionally I’ve been considering potentially to publish a presentation that I gave at a conference a few years ago, but I’ve been procrastinating a little in regards to that.

Quantum machine learning and the Fredkin gate

I recently discovered that quantum machine learning is a thing.  My interest in this area was piqued when I started wondering what might be next for AI after the triumph of DeepMind’s deep learning system trained to play Go.

In particular it has struck me (and others) that with the advent of quantum computers within the next ten years or so, it might be possible to take the architectural subtlety of machine learning to new heights, and get closer to that goal of AI, for machines that truly think.  Indeed, many are arguing that this is the ‘killer application’ for quantum computing.  I’m slightly late off the mark here, since the inception of the field was a couple of years ago in 2014, but this is still a fairly young discipline and I think many exciting things lie ahead for it.

There are essentially two key observations in this respect: observation the first, that existing algorithms could run much, much faster on a quantum computer and with much, much more data.  However, much more exciting is observation the second: that quantum computers would allow one to construct much more complex and intricate algorithms for supervised and unsupervised learning.

Towards the objective of actually building such a device, there was a recent breakthrough in which researchers managed to construct a quantum Fredkin gate.  The reason this is exciting is that the Fredkin (controlled-swap) gate is universal: any logic operation can be built by stringing a series of them together (rather like NAND gates in conventional logic diagrams).
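The controlled-swap behaviour is easy to sketch classically.  A minimal Python illustration (the quantum gate acts on superpositions of these basis states, which this classical sketch obviously does not capture):

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: swap a and b exactly when the control c is 1."""
    return (c, b, a) if c else (c, a, b)

# Universality for classical logic: AND and NOT fall out by fixing ancilla inputs.
def AND(x, y):
    # With the third input fixed to 0, the third output is x AND y.
    _, _, out = fredkin(x, y, 0)
    return out

def NOT(x):
    # With inputs (x, 0, 1), the third output is NOT x.
    _, _, out = fredkin(x, 0, 1)
    return out
```

Since the gate merely permutes its inputs, it is also reversible – applying it twice returns the original triple – which is exactly the property a quantum implementation needs.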

Of course this might not necessarily be ‘the’ critical advance, but quantum computing is an area that is certainly seeing a large amount of progress at the moment.  Doubtless people are hungry for more computational power, beyond what is achievable with classical computers – and the time seems right, with the slowdown currently being experienced in Moore’s law for run-of-the-mill circuitry.

Google compute cloud

Another thing that I recently learned was how deeply subtle and sophisticated Google’s cloud computing operation has become.  With products such as Bigtable, BigQuery, and Dataflow (whose programming model underlies Apache Beam), Google has become a truly formidable contender in the area, despite a relatively late start against Amazon’s established lead just a few years ago.

I will not write much here because I do not understand too much about what google has to offer, but I thought I would point out how quickly it appears that this area has moved, and how sophisticated cloud computing services have become.  The price points do not seem too bad, either – I learned the other day that one can hire a t2.nano from Amazon now for about $3.50 USD a month, which is amazingly cheap (although, of course, you get what you pay for, which, in the case of a t2.nano, is next to nothing at all).

Automated testing of virtual reality applications or ‘websites’

As a tester (my current gig) I have been wondering as to what automation testing might look like for a virtual reality based application or website.  I think it could be quite interesting to write tests for such beasts.

Towards a ‘static’ virtual reality website on the web

I am also interested as to what a ‘static’ virtual reality website might look like.  In particular it would be nice to have say a room with bulletin boards on the walls and clickable links, or the ability to read a newspaper lying on a virtual table, for instance.

Integration testing of microservices

I’ve noticed a trend in microservices testing towards using Docker Compose and Jenkins to run tests on a build pull request: spinning up a container for the project, along with containers for adjacent projects.  The idea is then that one hits endpoints on these containers to verify that the APIs are working correctly.  This is something that I would like to see better documentation for on the web; so far most of the knowledge seems to be locked up in the heads of specialists, and there are not many public projects or tutorials (or courses!) geared towards teaching people how to do this form of blackbox testing.  Hopefully in the next year or two there will be some more chatter on the web about this area, because it is certainly something that I’d like to do a bit more of.
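A hypothetical sketch of what such a Compose file might look like (the service names, images, and ports are all made up for illustration):

```yaml
# docker-compose.yml – spin up the service under test plus an adjacent service,
# then run a test container that hits their HTTP endpoints.
version: '2'
services:
  orders-api:          # the project built from this pull request
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - inventory-api
  inventory-api:       # an adjacent project that orders-api calls
    image: example/inventory-api:latest
  api-tests:           # black-box tests that hit the endpoints above
    build: ./tests
    depends_on:
      - orders-api
```

Jenkins would then run something like `docker-compose up --exit-code-from api-tests` for the pull request, and fail the build if the test container exits non-zero.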

Story ideas

In lieu of writing a story, I thought I might write down a few (short) story ideas.  Maybe I’ll post a short story to this blog tomorrow to make up for sidestepping my previous promise in said regard.

  • I thought it might be interesting to write something based in a world where some corporation exerts tremendous control.  Over time, the protagonist eventually finds their way to the executive of this company, and discovers a terrifying secret: that the Chief Executive Officer is actually just a magic 8-ball named Terence.  ‘The shareholders love him.’
  • A horse play.  A children’s story written as a play, where the main characters are essentially just horses that do not behave themselves particularly well.
  • Writing about the present from the perspective of the past (eg, the way SF authors in the 1960s viewed today), while inserting incongruous references.  For instance, writing about the early years of the 21st century from a moon base, and then making reference to a contemporary event that actually has happened.