Archive for the ‘Uncategorized’ Category

Consider not what is lost, but what is GAN

July 7, 2017

A generative adversarial network (GAN) is a pair of neural networks competing against one another in a zero-sum game (todo: consider the game-theoretic implications).  Classically one is a generator, and one is a discriminator.  The generator tries to fool the discriminator.  However, this setup can become unstable, and training can fail to converge when, say, learning to interpret or generate digits.
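For reference, the standard zero-sum objective from the original GAN formulation (Goodfellow et al., 2014), which the discriminator D maximises and the generator G minimises, is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

The instability mentioned above typically shows up when this minimax game fails to settle into a useful equilibrium.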

There is another recent paper, http://journal.frontiersin.org/article/10.3389/fncom.2017.00048/full, on cliques of neurons and their role in consciousness, which draws links between algebraic topology and intelligence.  In particular, it indicates that there are stratified layers of sets of neurons that can be up to 11 strata deep.

So the question I pose is twofold: one, is there a way to avoid instability in a classical GAN and also extend it to broader classes of problems; and two, can we consider the latter paper’s analogue in AI to be stratified layers of sets of neurons in an underlying ‘space’ of potential usable neurons, up to 11 strata deep, which can be created, read, updated and destroyed as time buckets continue?  Maybe we could implement this using dind (Docker-in-Docker).  All sorts of possibilities.

I think this broader idea of a ‘GAN’ would be useful, as it would allow us to remove neuron sets that are underperforming or have become unstable, and also allow new models to arise out of the morass in a Conway’s Game of Life fashion.

But how do we measure, how do we update, what data are we measuring against?  All excellent questions.

Perhaps then we need to consider the set of possible usable neurons as living in a module that has an input (a set of performance measures at time step T) and an output (the same performance measures at time step T+1).  Based on the difference we can apply a feedback loop to the universe of inputs.  Maybe we output the algebraic structure of the set of adversarial subnets in the universal net as a json object, say (with the model parameters included, e.g. matrix representations etc.), so

{json(T)} as input, and {json(T+1)} as output

where some nodes from the json file might be removed, and others added.

eg. { {{ [network including 2 sets of 4 nodes, 2 layers deep], A_ij }, { [network including 3 sets of 3 nodes, 3 layers deep], B_ij } }, { [network with 2 x 4 nodes ], D_ij } }(T)

{ { [ network including 2 sets of 4 nodes, 2 layers deep ], A_ij }, { [network including 3 sets of 2 nodes, 3 layers deep], C_ij } }(T+1).
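As a very rough sketch of how such a json(T) → json(T+1) step might be driven in code (all names here, such as ‘subnets’ and the performance threshold, are hypothetical placeholders rather than a worked-out schema):

import json
import random

# Hypothetical sketch of the json(T) -> json(T+1) feedback loop described above.
# Field names ("subnets", "score" threshold, etc.) are placeholders, not an API.

def step_universe(state_t, performance, threshold=0.5):
    """Remove underperforming subnets and spawn a fresh replacement candidate."""
    survivors = [s for s in state_t["subnets"]
                 if performance.get(s["name"], 0.0) >= threshold]
    # Add a new randomly-initialised subnet, Game-of-Life style.
    survivors.append({"name": f"subnet_{random.randint(0, 1 << 16)}",
                      "layers": random.randint(2, 4),
                      "params": []})
    return {"subnets": survivors}

state_t = {"subnets": [{"name": "A", "layers": 2, "params": []},
                       {"name": "B", "layers": 3, "params": []}]}
state_t1 = step_universe(state_t, performance={"A": 0.9, "B": 0.2})
print(json.dumps(state_t1, indent=2))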

One could have any sort of network within the universal neural space, e.g. a CNN, an ordinary hidden Markov model, etc.

So we have a big box, with json objects and current training parameters of same being spat in and out of the box.

The training parameters of the transient submodels could then be placed inside some form of distance measure relative to the problem we are optimising for (i.e., relative to one’s timeboxed dataset or evolving input).  It strikes me that this sort of approach, although difficult to implement, could potentially lead to applications much more wide-ranging than image manipulation.

A further generalisation would be to allow the model to create an output timeboxed dataset, and allow that to form an extrapolation of previous input or data.  This sort of approach might be useful, for instance, for performing fairly demanding creative work.  One would naturally of course need to seed the task with an appropriate starting point, say a collection of ‘thoughts’ (or rather their network representation) inhabiting a network of cliques in the universal neural lattice.

 

The current state of open source AI

July 2, 2017

I recently discovered the company OpenAI, which has a significant endowment, and has a couple of notable projects that enable work on training and testing AI algorithms.  In particular, these projects are Gym (https://github.com/openai/gym) and Universe (https://github.com/openai/universe).  Gym is described in this paper (https://arxiv.org/abs/1606.01540); it is essentially a toolkit for benchmarking the performance of various AI algorithms in reinforcement learning.  From the project page:

gym makes no assumptions about the structure of your agent, and is compatible with any numerical computation library, such as TensorFlow or Theano.

Universe fits in with Gym, in that Gym provides the interface to Universe, which provides an AI algorithm with an environment within which to train.  An example of this is given on the project page, which essentially demonstrates use of Universe and Gym to test an agent running the game DuskDrive in a Docker container:

import gym
import universe  # register the universe environments

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # automatically creates a local docker container
observation_n = env.reset()

while True:
  action_n = [[('KeyEvent', 'ArrowUp', True)] for ob in observation_n]  # your agent here
  observation_n, reward_n, done_n, info = env.step(action_n)
  env.render()

Evidently this is potentially quite powerful.  Indeed, from the main promotional page for Universe (https://universe.openai.com/), there are apparently already more than 1000 environments to test and train learning agents against, with many more in the works.

To get started, one could use the OpenAI code in the projects https://github.com/openai/improved-gan or https://github.com/openai/universe-starter-agent; however, those with a more sophisticated bent might instead be interested in investigating DeepMind’s project https://github.com/deepmind/learning-to-learn, which makes use of Google TensorFlow and DeepMind Sonnet (https://github.com/deepmind/sonnet).  Sonnet is a TensorFlow neural network library.

Towards increasingly unorthodox approaches, I think it would be fascinating to watch the development of algorithms that take advantage of the architecture of IBM’s neuromorphic chips (based, I believe, on field-programmable gate arrays), or, looking a bit further out, the opportunities in machine learning associated with topological quantum computers (Majorana-fermion based), possibly with a slight qudit boost (a hybrid/pseudo tetracomputer, maybe with switches consisting of up to 100 states).

I will continue to follow developments in this area with some interest.

Cryptography in the Quantum Computing Age

June 16, 2017

There is a problem with much of modern internet communication at the moment: its security relies on factoring being hard.  That is indeed the case for a conventional computing machine.  However, for a quantum computer, such as may become available to corporations and governments within the next 5 to 10 years, factoring will be easy (via Shor’s algorithm).

So this necessitates a ‘refresh’ of the algorithms maintaining internet security.  How can we do this?

How RSA works

Well, recall the following about how RSA works (from the Wikipedia entry; a toy numeric sketch follows the lists below):

  • Take two very large primes (100s of digits long) and multiply them together (n = pq, with p and q prime).
  • Compute h = lcm(p – 1, q – 1)  (this is easy via the Euclidean algorithm, as lcm(a, b) = ab/gcd(a, b)).
  • Choose e st e < h and gcd(e, h) = 1 (again, easy via the Euclidean algorithm).
  • Determine d st ed ≡ 1 (mod h)  [ so that d is the modular multiplicative inverse of e ].
  • e is then the public exponent; n is also released as public (together, (n, e) is the public key).
  • d is the private key.

Encryption process:

  • Bob obtains Alice’s public key e.  With a message M, he converts it into an integer m.  Then he computes c = m^e (mod n).

Decryption process:

  • Alice obtains Bob’s message c, and decrypts it as c^d = (m^e)^d = m (mod n)
  • Then m -> M is easy if the padding scheme is known.
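As a toy numeric sketch of the recipe above (deliberately tiny primes, purely for illustration; real keys use primes hundreds of digits long):

# Toy numeric walk-through of the RSA steps listed above.
from math import gcd

p, q = 61, 53                                # two (tiny) primes
n = p * q                                    # 3233, released publicly
h = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1) = 780
e = 17                                       # public exponent, gcd(e, h) == 1
d = pow(e, -1, h)                            # modular inverse of e mod h (Python 3.8+)

m = 65                                       # integer-encoded message M
c = pow(m, e, n)                             # encryption: c = m^e mod n
assert pow(c, d, n) == m                     # decryption recovers m
print(n, e, d, c)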

The Achilles heel exposed by a quantum computer

The main point is that RSA is hard to break because factorisation is hard.  But if factorisation is easy, Jill can intercept Bob’s ciphertext c and, given her knowledge of n and e (which are public), factor n into p and q, compute h, and then identify d as the multiplicative inverse of e mod h.

Consequent to this, she can then decrypt Bob’s message to Alice as above.

So, if factorisation is easy, RSA is in a bit of a pickle.  But is there a better way to encrypt?

RSA over an integer polynomial ring

Process (a sketch of the key-generation arithmetic follows the list below):

  • Take two very large irreducible polynomials over the integer polynomial ring Z[x] (100s of terms long) and multiply them together (n[x] = p[x]q[x], with p[x] and q[x] irreducible).   nb. for extra security, you may want to ensure that the coefficients of these polynomials are each at least 100 digits long.
  • Compute h[x] = lcm(p[x] – 1, q[x] – 1)  (this is easy via the Euclidean algorithm for integer-valued polynomials, as lcm(a, b) = ab/gcd(a, b)).
  • Choose e[x] st deg(e[x]) < deg(h[x]) and gcd(e[x], h[x]) = 1 (again, easy via the Euclidean algorithm).
  • Determine d[x] st e[x]d[x] ≡ 1 (mod h[x])  [ so that d[x] is the modular multiplicative inverse of e[x] ]
    • Note: This is hard, unless factorisation is easy.
  • e[x] is then the public key.  n[x] is also released as public.
  • d[x] is the private key.
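A small sketch of the key-generation arithmetic over Z[x], using sympy purely as an assumed illustration tool (the polynomials here are tiny; the scheme above envisages far larger ones):

# Sketch of the polynomial key-generation steps; sympy is an assumption, and the
# polynomials are kept tiny purely for illustration.
from sympy import symbols, expand, gcd, lcm, factor_list

x = symbols('x')

p = x**2 + x + 1          # irreducible over Z[x] (a stand-in for p[x])
q = x**2 + 1              # irreducible over Z[x] (a stand-in for q[x])
n = expand(p * q)         # n[x] = p[x] * q[x], released publicly

h = lcm(p - 1, q - 1)     # analogue of h[x] = lcm(p[x] - 1, q[x] - 1)
e = x + 2                 # a candidate e[x]; want gcd(e[x], h[x]) = 1

print(n)                  # the public modulus analogue
print(gcd(e, h))          # should be 1 for a valid choice of e[x]
print(factor_list(n))     # recovering p[x] and q[x] means factoring n[x]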

Encryption process:

  • Bob obtains Alice’s public key e.  With a message M, he converts it into an integer m.  Then he computes c = m^e (mod n).

Decryption process:

  • Alice obtains Bob’s message c, and decrypts it as c^d = (m^e)^d = m (mod n)
  • Then m -> M is easy if the padding scheme is known.

Is this vulnerable to attack by a Quantum Computer eavesdropping on a channel carrying an encrypted message?

I claim not, as the set of irreducible (‘prime’) polynomials over the integer ring is much bigger than the set of primes.  The cardinality is the same (both are countably infinite), but the complexity just seems intuitively much greater to me.  Factorising a ‘semi-prime’ polynomial, whose factors are irreducible polynomials with possibly 100 or 200 terms, each with coefficients up to 150 digits in length, is, I suspect, much, much more difficult than factorising a semi-prime integer.

If one had a tetra-computer (“hyper-computer”), the story might be different, but I suspect we will not have sufficiently powerful variants of those until 2050.

Towards zero latency – improved communications, coming soon to you

June 16, 2017

This recent press release got me thinking about how a quantum internet might work.  In particular, one question an observer might ask is, why do we need entanglement to transmit information?

The answer is that the ‘speed of entanglement’ is fast: experimental bounds suggest that any such speed would have to be at least 10,000 times the speed of light, if not effectively instantaneous.  It is quick.  But, you say, can’t I communicate quite rapidly anyway from one side of the world to the other?  Yes, but there is latency.

The bane of latency

Latency is a problem in internet communications today.  This problem is felt most acutely when attempting high-bandwidth communications that require near-instantaneous feedback.  And a key medium (and one of the first media to adopt the new technologies that consume all the bandwidth, like VR) is gaming.

Gaming where the data centre is located in another country can be a very frustrating experience.  That is where entangled communications would excel.  Additionally, entangled communications hold the promise of increasing bandwidth drastically, from megabits per second to gigabits, or even terabits, per second.

Entanglement is useful

By entangling a medium of communication, such as photons, and then separating the entangled parts, it may be possible to transmit data very quickly indeed.  However, there are a couple of obvious problems:

  • How does one ‘recharge’ one’s entanglement capacity at each network node to service the required / anticipated bandwidth?, and
  • How does one ensure one has enough bandwidth for these communications?

These are interesting questions.  Certainly, it seems intuitively reasonable that with a first-pass quantum internet, the rate of recharge of entanglement would at best be limited by:

  • The speed of light for transportation of entangled pairs from a source; and
  • The amount of energy consumed in creating entangled pairs

Resolution of the problems

However, it does not seem unreasonable that the amount of energy required to create an entangled pair could be relatively low and fairly cheap, economically.  Which leads to the matter of transporting this ‘data energy’ to where it needs to be consumed.  This is a problem that would necessitate distributing data energy not just to a single point, but rather to a 2-tuple of points.  This should be possible, e.g. with photons, as one could use optical waveguides to transmit them to storage at particular centres, so as to administer 2 to 3 times the anticipated capacity in time bucket N, say, between nodes K1 and K2, with redundancy a function of the amount the market between K1 and K2 is willing to pay, and reliability (that is, the duration of the entanglement) also a function of same.

Ensuring duration of reservoir of entangled pairs between nodes K1 and K2

Assuming that one has transmitted entangled information between nodes K1 and K2, it should be possible to have error correction operating at both ends to ensure the integrity of the entangled state of the ‘data energy’.  This will require an input of first-order energy at both nodes K1 and K2.  As to the details of implementation, I am not sure, but this is something that has likely already been subject to exhaustive study (the integrity of entangled information, that is).

Concluding thoughts

This theorycraft sketch of a quantum internet still requires that ‘data energy’, the backbone of same, be limited in transmission by the speed of light.  But this is not such a bad thing, as long as one can anticipate demand in advance.  For example, in the energy markets one often has to plan in advance how much energy people in a particular location at a particular time will want to use.  If there is not enough, there will be blackouts.  The equivalent in the quantum internet could be fallback to the regular internet.  But, hey.  At least things would still work.  Sort of.

Experiments with APIs: a CFA fire alerts app

March 13, 2017

Background

Over the last couple of days I thought it might be interesting to investigate writing a prototype CFA fire alerts app.  The key motivation was essentially the pursuit of simplicity.  Emergency Victoria do have an alerts app, but it contains a plethora of calls to various different RSS feeds and other APIs, and is not just a way for people to be notified of CFA fire alerts.

The CFA used to have an older app, in fact, but it was decommissioned in favour of the “improved” and more general app.

So I decided to try to build my own this Labour Day weekend.

Ultimately, I was successful in my goal of building a prototype.  Essentially I built an app in Unity that I was able to compile and deploy to my Android phone.  It displays a map centred on hard-coded latitude and longitude coordinates, and places a black pin anywhere on that map corresponding to a flammables-related incident that the app considers interesting.  It only does this once; there is no refresh option and no buttons to push.  Very simple and utilitarian, but a good proof of concept.

Method

Getting the data

So how did I go about building the app?  Well, first I worked out how to fetch the XML file located here: https://data.emergency.vic.gov.au/Show?pageId=getIncidentRSS, by making a C# web request using Unity’s libraries.

Processing the data

Secondly, I deserialised that xml object into a list of incidents.

Next, I extracted all of the information from the description attribute: an additional 13 fields.
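For reference, here is a minimal sketch of those fetch-and-parse steps in Python (the actual app does this in C# with Unity’s web request classes; this sketch also assumes the feed uses the standard RSS item/title/description layout):

# Fetch the incident feed and flatten it into a list of incident dicts.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://data.emergency.vic.gov.au/Show?pageId=getIncidentRSS"

with urllib.request.urlopen(FEED_URL) as response:
    xml_text = response.read()

root = ET.fromstring(xml_text)
incidents = []
for item in root.iter("item"):
    incidents.append({
        "title": item.findtext("title"),
        # The description holds the extra fields, which get parsed out separately.
        "description": item.findtext("description"),
    })

print(len(incidents), "incidents in the feed")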

Obtaining the interesting information within the data

Consequent to this, I used my Google and script-kiddie abilities to graft onto my project some code that could convert two pairs of (latitude, longitude) coordinates into a relative distance.

After that, I wrote a filter to determine what constituted an “interesting” fire: one sufficiently close to a particular pair of hard-coded coordinates, and with the additional qualities that the fire was not small and not under control.
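A sketch of the distance calculation and the “interesting” filter follows; the field names, home coordinates, and thresholds here are illustrative placeholders, not the values the app actually uses.

# Great-circle distance plus a simple "interesting incident" filter.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two (lat, lon) pairs, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

HOME = (-37.8136, 144.9631)   # placeholder hard-coded point of interest

def is_interesting(incident, max_km=50.0):
    close = distance_km(HOME[0], HOME[1], incident["lat"], incident["lng"]) <= max_km
    big = incident["size"].lower() != "small"
    uncontrolled = incident["status"].lower() != "under control"
    return close and big and uncontrolled

example = {"lat": -37.0, "lng": 144.5, "size": "MEDIUM", "status": "Going"}
print(is_interesting(example))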

Debugging statements and breakpoints helped tremendously throughout this process.

Displaying the information to the user

So that allowed me to determine what was interesting.  But that left one final thing: figuring out how to display it to the user.

Fortunately, there was a plugin on the Unity Asset Store that I was able to download for free, for the Google Static Maps API.  With additional code-kinder know-how, I was able to adapt the out-of-date code therein.

With further chicanery and trial and error, I was then able to use my hard-coded (lat, long) coordinates to centre the map about my location of interest, and push the interesting incidents through as a list of markers to be placed on the map as black pins.  This relied entirely upon the features of the Google Static Maps API.

Deploying to Android

Finally I investigated building and deploying the project to Android.  I found that I needed to do a number of things:

  • Install Android Studio 2.3.1.
  • Install the Android SDK for Android 6.0.
  • Install the JDK for Java SE 8 from the Oracle website.
  • Alter the Unity settings to point to this new JDK rather than the Java SE 7 one.
  • Alter the player settings within the Unity build settings so that the 32-bit display buffer was unchecked.
  • Alter the player settings within the Unity build settings so that the project had an appropriate package name (com.spinningcatstudios.ChrisCFAApp).
  • Create an Android manifest at ProjectRoot/Assets/Plugins/Android/AndroidManifest.xml with an appropriate format, so that I could select the appropriate Android SDK.
  • Finally, in Settings/Security on the phone, allow installation of apps from unknown sources, so that I could install my app.

After this I was able to build the project and debug it while connected to my development computer.  However, I found that Astro File Manager could not install APK files on Android 6, so I needed to switch to a more modern file manager.

Future features

Various additional features that come to mind are:

  • Configurability of what constitutes an interesting event.
  • Ability to alter the zoom and focused location of the map, to see more or less.
  • Add additional data feeds (eg traffic incidents etc) with other additional filters.
  • Move from the Google Static Maps API to the dynamic Google Maps API 3.0.
  • Notifications for when alerts occur

Bug fixes

  • Make the map look less stretched.

Parting thoughts

All in all, this was an interesting experience.  The end result is that I have a working prototype, which I can certainly deploy to Android, and which hopefully will not be too painful to deploy to iOS.  Anyway, here’s hoping!

Mechanised farming in desert areas

January 25, 2017

I’ve been thinking recently about an engineering problem that I started considering a number of years back.  The challenge is fairly straightforward; in various parts of the world, such as Australia, there are large desert wildernesses that are essentially unlivable, but which contain vast amounts of sunlight, and land.

This suggests that one should theoretically be able to set up a series of solar energy plants to power, say, one or several massive mechanised agricultural operations.  There are small pilot examples where this model has already been demonstrated to work: for instance, Sundrop Farms at Port Augusta in South Australia, whose operation relies on solar power and proximity to the ocean, and which is partly, though not completely, automated.

Ultimately, however, it could well be a good goal to aim to pipe water from the sea to support intensive agriculture in greenhouses many miles from open water.  This could be achieved through using solar power and batteries in many areas.  Such operations could be monitored, maintained, and harvested by drones operated by artificial intelligence.  The logistics of transporting goods to ports or highways could also be managed by autonomous machines, potentially with refrigerated interiors.  Container vessels could then transport produce as trade to regions that required such.

There are many small problems that need to be solved here though in order for something like this to be practicable at large scale.  In particular it needs to be possible to build and maintain such infrastructure autonomously.  This will require advances not only in renewable energy (which we largely already have in wind, solar, and lithium batteries), but also in robotics and artificial intelligence.  I suspect that the latter two technological categories mentioned will largely be mature enough for this sort of operation by 2030.

I think that it would be worthwhile to try to build small first and then expand, though – with potentially many, many ventures in different arid regions or locations – rather than waiting for things to become completely mature.  Although I dare say that that is where the commercial reality is, regardless.

Notes from a machine learning unconference

December 5, 2016

Here are a few dot points I took from a machine learning unconference I went to recently.

  • The Stan probabilistic programming language.  You can declare variables without giving them values and then use them in calculations.  Outcomes are determined probabilistically.
  • Andrew Gelman (Columbia) apparently knows things about multilevel models (beyond Bayesianism).
  • GANs (generative adversarial networks/models) are powerful.  Tension between minimising one thing and maximising another.  Made me think about Akaike information for the ‘max’ of an algorithm and Fisher information for the ‘min’.  Might be able to render these stable that way.
  • KiCad, an electronics CAD tool for microcircuits.
  • Ionic transfer desalination
  • Super-resolution is amazing (more GAN).
  • InfoGAN.
  • LIME, GAN super-resolution, local to global (a model explaining a model).  See https://github.com/marcotcr/lime .  Might be the next step for this sort of thing (testing machine learning models).
  • Electronic Frontiers Australia; Citizenfour.
  • Simulation to generate data: UnrealCV, Unity3D to train machines.
  • Parsey McParseface and Inception, Google pretrained models.
  • ImageNet.

Udemy courses and a procedural generator SDK

December 1, 2016

I’ve recently taken advantage of the latest ‘Black Friday’ sale on Udemy, and consequently have been taking a few courses on client-server interaction with Unity (using DigitalOcean for the server), as well as database manipulation (using the MAMP stack: macOS, Apache, MySQL, PHP).  I’ve found these courses edifying and useful.

The same fellow who has authored these courses also authored a course on procedural generation, which I am taking currently.

My hope is that I might be able to take ideas from the procedural generation course, combine them with ideas from the multiplatform runtime level editor plugin for Unity, and thereby be able to procedurally generate content from within a running game.  But in terms of the absolute bare minimum for an MVP, I guess I would be after:

  • The ability to toggle between player and DM.
  • A single view for the client, containing just a flat featureless plane.
  • An asset to drag into the scene for the DM (say, a tree).
  • The ability to extract the metadata of the scene containing the one additional asset and send it via PHP to a server on DigitalOcean.
  • The ability for the server, on receipt of a message from a client flagged as a ‘DM’, to broadcast it to all clients.
  • All non DMs should see their scenes update to see the tree.

This should not be too hard.  I think I have all of the necessary pieces to do this now.  After this, for MVP 2, I’d like to extend things in the following way:

MVP2 (transmission of procedurally generated scene metadata):

  • In the DM view, the ability to procedurally generate a town.
  • It should then be no harder to extract the scene metadata and transmit that to the server.
  • The non-DM views should then see their scenes update to see the town.

MVP3 (modifying assets procedurally):

  • The ability to procedurally generate terrain.  This is modifying an asset (being the terrain).
  • Transmission of the modified asset over the wire.
  • The non-DMs views should eventually update to see the modified asset.

MVP4 (limitation of privilege and additional view):

  • The DM should now have two views: the edit view, and the same view that the non-DMs see.
  • The non-DMs might also be able to see the edit view, but have limited privileges to change things therein.

Consequent to this, I’d like to see if I could create some sort of ‘procedural generator SDK’ that allows developers to easily build procedural generators.  The idea is that these would take several sets of assets, split across several tags (some assets perhaps having multiple tags).  The SDK would also let the developer customise some sort of algorithm for how these tags should be manipulated relative to each other.  Finally, I’d want the SDK to give the developer a way to expose a simple UI to a user.

For this next stage of the project, an MVP would simply be an SDK that created a procedure for two tags, and a ‘Go’ button.

MVP2:

  • Multiple tags, with possibly multiple tags per asset, and a ‘Go’ button.

MVP3:

  • Now with a more complex and configurable UI for the procedural generator created by the script created using the SDK.

MVP4:

  • A procedural generator marketplace exposed within the game.

Further thoughts regarding Unity project

November 19, 2016

It turns out that there are two key ideas that should prove useful in my investigations: text-based scene files, and the asset database in Unity 5.4+.  Every time one modifies something, one would like to take note of whether it was a scene change (an asset addition or reposition) or an actual modification of an asset.  Since terrain is an asset, this is an important distinction to track.

Consequently, when building the architecture, I am roughly interested in the following: one, a set of Photon Bolt clients talking to a Photon Zeus server running on Amazon EC2; and two, each client being tagged as either a ‘Dungeon Master’ or a ‘Player’.

Then, for a Player, the experience is essentially a multiplayer game with signals synchronised with other clients in the ‘Active World View’.

For a Dungeon Master, they have the Active World View, but also a World Modification View on another tab.  Both of these have, as their source of truth, the game state, which is stored on the Zeus server as metadata (text-based scene files) plus binary files for the relevant assets (uploaded via the asset database API).

So the Zeus server in EC2 has the game state.  Every <interval>, say five seconds, this is broadcast to every client (pushed).  Each client then has a new game state.  This state is then loaded into the Active World View (the game).

For the Dungeon Master, they have the option to modify things in the World Modification View.  Changes here update the scene in their viewpoint (the DM game state), which is then pushed by a polling mechanism to the Amazon EC2 server, for distribution as the new game state to each client (including the Dungeon Master’s Active World View).

Say, as a scenario, that the DM adds an asset.  This is reflected as a change to the text-based scene file, which is pushed up to Amazon as a fairly small payload, then pushed down to all the clients again.

Alternatively, suppose that the DM edits an asset (e.g. the terrain).  This is reflected in a change to that asset in-game, which then needs to be transmitted as binary data to the server (a slightly larger payload), which is then pushed down to all clients again.

Persistence

It should ideally be possible to persist a map, as then things would not need to be reworked.  To do this, the DM could have a local database store for their World Modification View state, which could store the scene text file for the state when they click ‘save’, plus the binary files for any assets that have been created and/or modified in situ.  These should evidently be tagged in some easy-to-find way (either by player naming or timestamping).

Multiple DMs

If multiple players have edit privileges, things are evidently a bit more complicated.  The server truly does have to own the game state, and one needs to be able to handle race conditions and queues in terms of work being done to alter the game state.

In this model, the server broadcasts the game state to each DM client.  Then the Active World View naturally reflects this new game state.  For the World Modification View, it should theoretically be possible to update this too, but the view should not change while a DM/player is actively altering something (i.e. after they click ‘edit’).  When the player clicks ‘commit’, their changes should be pushed to the server, and any changes pushed down by the server in the meantime should be loaded in as a pending queue of alterations.

Scenario: say that P1 adds a tree and a house.  Then P2 adds a horse and a goat.  Meanwhile P3 is altering the terrain.  P1 commits first, then P2 does.  P1’s data reaches the server after P2’s data, but the server notes that P1 made their request first and queues the requests accordingly.  The server pushes to P1, P2, and P3.  P1 and P2 then see the changes in their World Modification View.  However, P3 sees no change.  P3 commits, finally happy with the terrain change.  P3 then sees P1’s change, and a moment later P2’s change.  Of course, meanwhile, P3’s change has been pushed to P1 and P2.

(This model is best viewed in terms of ‘git checkout -b P1/tree_house’, ‘git add tree’, ‘git add house’, ‘git commit -m "I added a tree and a house"’, ‘git push origin P1/tree_house’, ‘git pull’.)  i.e. it is analogous to source control – in fact, it is the same sort of philosophy.  In that way, multiple ‘branches’ could be spun off before merging back into ‘master’.  But of course, that leads to:

Merge Conflicts

Say two DMs have alterations that conflict with one another.  Which gets applied?  In this case, I take the philosophy that ‘the last is always right’.  So the last becomes the new master.
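As a rough sketch of that queueing and ‘the last is always right’ rule (in Python purely for illustration; the real thing would live in the Photon/C# layer, and all names here are placeholders):

# Server-side ordering of commits plus last-writer-wins conflict resolution.
from dataclasses import dataclass, field

@dataclass
class Change:
    player: str
    requested_at: float   # client-side timestamp of the commit request
    edits: dict           # e.g. {"tree": {...}} or {"terrain": ...}

@dataclass
class GameState:
    objects: dict = field(default_factory=dict)

    def apply(self, change):
        # 'The last is always right': later changes overwrite conflicting keys.
        self.objects.update(change.edits)

def process_commits(state, pending):
    # Queue by when each request was made, not when it happened to arrive.
    for change in sorted(pending, key=lambda c: c.requested_at):
        state.apply(change)
    return state

# P2's commit arrives first, but P1 requested earlier, so P1 is applied first.
state = process_commits(GameState(), [
    Change("P2", requested_at=12.0, edits={"horse": {}, "goat": {}}),
    Change("P1", requested_at=10.0, edits={"tree": {}, "house": {}}),
])
print(state.objects)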

Unity Game Project – some thoughts

July 17, 2016

A few months ago I revisited my Unity game project, and re-examined the feasibility of allowing a dungeon master to modify a level in real time while players were running around.  The idea was for it to be quite minimal: no combat mechanics, no unnecessary animations, and no fancy character progression, skill development, special moves, etc.  Just a world in which people could wander around, with static NPCs/monsters and objects like houses and trees.

Denis Lapiner’s multiplatform level editor looked promising to me a while ago, so I looked into the mechanics of it at that juncture.  The way it actually works, it turns out, is to create a terrain object, serialise it, and save it as a blob to local storage, which is then reloaded in a second scene.  Hence my immediate hopes of doing everything in one scene were readily dashed.

However, I’ve thought a bit more about this problem recently, and I have renewed hope that something interesting might still be possible.  I think, with some skill and cunning, it might be possible to modify the level editor scene so that two or three times a minute a snapshot of the terrain is taken, and the diff is applied to regenerate a preview scene, into which the players and everything else are also pushed (as appropriate network objects), and which is also viewable by the dungeon master.  So, in essence, the DM would see two copies of the level (two subscreens on their screen): one being the level they are editing, and the second being a non-interactive snapshot of the scene (an ‘autosaved’ version, if you will), which they can pan the camera around, or even avatar into, but not interact with via the level editor.  The second screen would be updated several times a minute (or, on a truly powerful device, maybe not now but at some point in the future, several times a second, so one could have a refresh framerate).  One could also imagine the DM being able to ‘minimise’ the editor portion so that they could focus on DM’ing in the ‘player’ or ‘viewer’ subscreen.

The viewer subscreen need not be totally toothless, however, as prefabs might be instantiable within the level using the standard Unity mechanics.

The players themselves would see a separate scene that just shows the current snapshot of the DM level, like the ‘viewer’ subscreen for the DM, but this would be their entire world.  They might also be able to instantiate prefabs as well, but they would not be able to sculpt or modify terrain like the DM.

In terms of the desired experience for a player, they should, once every 10 to 20 seconds or so, see the level shift around them to reflect the latest changes the dungeon master has made.  Something that would need to be caught, however, is the idea of where the ground is.  The player’s vertical coordinate would therefore need to be constantly remapped so as to preserve their height above the terrain at the (x, y) position they are located at.  Players would also need to be able to reposition their avatar via a third-person perspective if they get stuck in a wall or an object that has suddenly appeared.
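To make that remapping concrete, here is a tiny sketch of the rule I have in mind (my own formulation, not anything from the plugin):

# Keep the player's offset above the ground constant across a terrain refresh.
def remap_player_height(player_h, x, y, old_terrain_height, new_terrain_height):
    offset = player_h - old_terrain_height(x, y)         # height above old ground
    return new_terrain_height(x, y) + max(offset, 0.0)   # never end up underground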

The interface that I plan to use to manage the networking will still be Photon Bolt, with Zeus as a server running somewhere on Amazon EC2.

So that actually starts to seem like it would be vaguely doable.  And maybe, at some point in the future, with breakthroughs in personal computing, one might be able to ramp up the refresh rate, as well as enable the experience for VR.  Game engine improvements might also help increase the scene refresh rate.