Archive for the ‘Uncategorized’ Category

Assembling the pieces

September 15, 2019

I’ve been busy learning a bit about how to write games in Godot recently, in preparation for my D&D Sandbox game.

In particular, I’ve been following the Godot Getaway course (originally a Kickstarter) to learn the basics of networking and procedural generation with GridMaps.

I’ve also backed two other courses on Kickstarter: one that I don’t yet have access to, and another that I haven’t started due to lack of energy.

For more advanced networking, I’ve been looking into a video series wherein clients can join and leave asynchronously (and wherein a small generalisation should allow a client to be promoted to the new host if the host leaves). The same series also covers how to host a Rails server in the cloud, which is something I’d like to get running as well.

For actual point-and-paint in Godot at runtime, I believe I have enough code to get started from, albeit I will need to generalise it to GridMaps.

However, ideally I’d like to paint randomised, procedurally generated content onto a basic canvas during runtime, to save effort while playing the game – and eventually do more complicated things like painting in cities, forests, or mountains. For that, I’m starting to look into things like Perlin noise and Worley noise.
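Worley noise, at least, is straightforward to sketch. The following is a minimal pure-Python illustration of the basic (F1) variant – not tied to Godot; a GDScript port would follow the same shape:

```python
import math
import random

def worley(width, height, n_points, seed=0):
    """Return a 2D grid where each cell holds the distance to the
    nearest randomly placed feature point (basic F1 Worley noise)."""
    rng = random.Random(seed)
    points = [(rng.uniform(0, width), rng.uniform(0, height))
              for _ in range(n_points)]
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            # Distance to the closest feature point; thresholding or
            # normalising this gives the familiar cellular texture.
            row.append(min(math.dist((x, y), p) for p in points))
        grid.append(row)
    return grid

noise = worley(16, 16, 5, seed=42)
```

Mapping the distances to tiles (e.g. low values become "city", high values become "wilderness") is then a matter of thresholding per GridMap cell.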


Running httpbin on Azure

August 24, 2019

Wait but why? Rationale

The purpose of this exercise is to deploy an ad hoc server for testing purposes. The advantages of this are:

  • Configurability of testing service (we can extend it however we like)
  • Reliability / control over testing service

Create Azure VM

First, create an Azure VM with Ubuntu 18.04, making sure you provide ssh access.

In particular:

  • Select the B1s size. This gives you 1 GB of RAM and 1 vCPU, which should be sufficient
  • Make sure you provide yourself with ssh access by supplying your public key (under ~/.ssh/) to the machine
  • Expose ports 22, 443 and 80
  • Make sure you disable automatic shutdown. If you don’t, your IP will cycle, which will break DNS resolution.

Select and configure a Freenom domain

Get a free Freenom domain and point it at your VM’s IP.

Install docker and build a custom image

Then ssh into your box and install docker:

sudo apt-get update
sudo apt install -y

Note: if we merely wanted to serve on http rather than https, we could be done at this point by simply running “sudo docker pull kennethreitz/httpbin && sudo docker run -d -p 80:80 kennethreitz/httpbin”, with no need to build a custom image. But we want to deploy a testing server, so we need https as well. As is always the case with these things, it turns out 80-90% of the effort is in adding a small additional (but important) piece of functionality.

Clone kennethreitz/httpbin and alter the Dockerfile so that the container exposes port 54 and the application runs on port 54. We need to move off port 80 because certbot requires it.
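Concretely, the Dockerfile change is small. This is a sketch only – the actual Dockerfile in the repository may use a different base image and entrypoint, so treat the gunicorn line below as an assumption:

```dockerfile
# Change the exposed port from 80 to 54, freeing 80 for certbot.
EXPOSE 54

# Bind the application to port 54 as well (assuming a gunicorn entrypoint).
CMD ["gunicorn", "-b", "", "httpbin:app"]
```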

You will also need a docker hub id, and to log into your docker hub account before doing this step.

docker build -t your-docker-hub-id/httpbin .
docker push your-docker-hub-id/httpbin

Then run your custom image on your server,

docker pull your-docker-hub-id/httpbin
docker run -d -p 80:54 your-docker-hub-id/httpbin

Check that things work by browsing to your VM’s IP and seeing that the application loads. Then remap the container to host port 42:

docker ps
docker stop <container_id>
docker rm <container_id>
docker run -d -p 42:54 your-docker-hub-id/httpbin 
# port 42 of host goes to port 54 of container

Install Apache

Install apache and start it up.

sudo apt install -y apache2

Set up a virtual host on apache using the Freenom domain you created earlier. Below, your.domain is a placeholder; replace it with your own domain.

sudo mkdir /var/www/your.domain
sudo chown -R $USER:$USER /var/www/your.domain
sudo chmod -R 755 /var/www/your.domain
nano /var/www/your.domain/index.html
        <title>Welcome to your.domain!</title>
        <h1>Success!  The virtual host is working!</h1>
sudo nano /etc/apache2/sites-available/your.domain.conf
<VirtualHost *:80>
    ServerName your.domain
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/your.domain
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
sudo a2ensite your.domain.conf
sudo a2dissite 000-default.conf
sudo apache2ctl configtest
sudo systemctl restart apache2

Set up certbot

Install certbot to secure your domain name. Certbot allows us to automatically refresh our site certificate with LetsEncrypt, so that we are always secured.

sudo add-apt-repository ppa:certbot/certbot
sudo apt install -y python-certbot-apache
sudo certbot --apache -d your.domain -d www.your.domain

When prompted about redirects, select option 1, ‘no-redirect’.

Do a dry-run of certbot’s auto-renew functionality.

sudo certbot renew --dry-run

Serve the application on http

Now, we’d like to serve our docker application.

First, make sure that port 42 is open on your Azure VM. This is very important, since otherwise the next step won’t work.

In your apache2.conf, set up forwarding to the correct port

sudo vim /etc/apache2/sites-available/your.domain.conf
LoadModule proxy_module modules/
LoadModule proxy_http_module modules/
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyRequests Off
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/your.domain
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    ProxyPass /app http://localhost:42/
    ProxyPassReverse /app http://localhost:42/
</VirtualHost>

sudo a2enmod proxy_http
sudo systemctl restart apache2

Now navigate to your.domain/app, and you should see the application, albeit without CSS and a few other assets.

Attempt to serve the application on https

Ok, almost there! Now we want to:

  • modify the application docker image again, to fix the CSS errors (since we are now serving on /app, rather than the base domain)
  • allow ourselves to redirect on https, so that https://your.domain/app works, too
  • create two new log files, ${APACHE_LOG_DIR}/443_error.log and ${APACHE_LOG_DIR}/443_access.log

sudo touch /var/log/apache2/443_error.log
sudo touch /var/log/apache2/443_access.log

Enable things we will need for ssl

sudo a2enmod rewrite
sudo a2enmod ssl

Update our apache.conf file

sudo vim /etc/apache2/sites-available/your.domain.conf
LoadModule proxy_module modules/
LoadModule proxy_http_module modules/

<VirtualHost *:80>
    # ProxyPreserveHost On

    # server config
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/your.domain

    # proxy
    ProxyRequests Off
    ProxyPass /app http://localhost:42/
    ProxyPassReverse /app http://localhost:42/

    # logs
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

<VirtualHost *:443>
    # ssl
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/your.domain/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/your.domain/privkey.pem

    # server config
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/your.domain

    # proxy
    ProxyRequests Off
    ProxyPass /app http://localhost:42/
    ProxyPassReverse /app http://localhost:42/

    # logs
    LogLevel debug
    ErrorLog ${APACHE_LOG_DIR}/443_error.log
    CustomLog ${APACHE_LOG_DIR}/443_access.log combined
</VirtualHost>

sudo chown -R $APACHE_RUN_USER:$APACHE_RUN_GROUP /etc/letsencrypt/live/your.domain
sudo apache2ctl configtest
sudo systemctl restart apache2

This is enough to serve the app over http, but not securely over https – you will get 404 Not Found. The solution is to use nginx as a reverse proxy for apache.

Set up Nginx as a Reverse proxy for apache

First, install nginx

sudo apt install -y nginx

  • start nginx (sudo systemctl start nginx) and enable the ufw firewall
  • then secure with Let’s Encrypt and renew automatically via a cron job

Finally, DNS! I may update this later. I still haven’t been able to progress properly beyond serving http only (no certificate), running httpbin on the bare IP. I would need to use Freenom to get a domain, add A records, and point to the correct nameservers on that service.
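For what it’s worth, the nginx side of that plan might look roughly like the following server block. This is a sketch under stated assumptions: apache is assumed to have been moved off port 80 (e.g. Listen 8080 in ports.conf), and your.domain is a placeholder:

```nginx
server {
    listen 80;
    server_name your.domain;

    location / {
        # Pass everything through to apache, preserving the original host
        # and client address for logging.
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Certbot’s nginx plugin can then be pointed at this block to add the 443 listener.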

Water in arid places: Reticular Chemistry, MOFs, COFs, weaving and pathways to practical nanotechnology

August 10, 2019


I recently noticed that New Scientist mentioned a particular application of some particularly interesting research directed by Omar Yaghi, a Jordanian-American chemist at UC Berkeley. This work is notable for a number of reasons. Perhaps the best introduction to it is a talk of his recorded at Stockholm University on May 9-10, 2019:

Notes from the talk

In the video, a few notable things are mentioned:

  • Up until the 1990s – indeed, 1993:
    • polymer chemistry (1d chains) was state of the art
    • 0d (organic chemistry) was well understood
    • 2d, 3d chemistry was not well understood
  • The basic game of chemistry is to create compounds that aid synthesis of other compounds
    • In order to have reproducible results you need to crystallise your catalytic compounds
    • But the crystallisation problem was not solved
    • A key problem with crystallisation is understanding how to deal with compounds whose bonds are stronger than van der Waals forces, hydrogen bonds, and M-donor bonds in coordination networks

In the late 1990s, Omar made progress in reticular chemistry with MOFs (metal-organic frameworks), which solved the crystallisation problem. These used M-charged linker bonds.

Later still, in the 2000-2010 decade, Omar made progress with covalent organic frameworks (COFs), which use covalent bonds.

Together, these two forms of stronger bonds form the basis of what he calls Reticular Chemistry.

Generalisations of COFs include molecular weaving (~2016). Current research involves introducing heterogeneity into the crystalline backbone (rather like DNA) to mimic proteins (i.e. organic molecular machines) under a much wider variety of reaction conditions – in other words, nanotechnology.

But the rest of his talk didn’t really focus on his work from 2000 onward; rather, it focused on applications of the pure research on MOFs dating from the 90s and its gearing for practical applications (a company based on these applications, Water Harvesting Inc., is due to launch in October this year).

Basically, his point was that with MOFs you can build a massive number of combinatorial possibilities of catalytic compounds that can do various things, depending on:

  • the choice of metal
  • the choice of organic molecule
  • the choice of geometry

These form a ‘periodic table’ of reticular chemistry of tremendous combinatorial complexity. “If you can imagine it, you can build it”, to paraphrase Omar’s talk.

Possibilities of applications of such combinatorial choices include:

  • hydrogen storage (for hydrogen powered fuel cells)
  • methane storage (up to 3 times that of just storing the methane without the MOF)
  • carbon dioxide sequestration and conversion into methanol fuel (work in progress, currently in the lab, but with obvious applications to combating global warming while producing a useful product to boot)
  • water generation in arid air

He spoke a bit about water generation. There are apparently 6 septillion litres of water in the air at any one time – as much as is present in freshwater lakes and streams. A lot of current water-from-air (dew-harvesting) technologies rely on high humidity (~65%) and on the air being cooled to ~1.5 degrees Celsius for the dew point to be reached. So they are 1) very energy inefficient and 2) not suited to desert conditions, where humidity might be between 5% and 25-30% at most during the night.
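The dew-point claim is easy to make concrete with the standard Magnus approximation (the coefficients below are the usual Magnus constants; the formula is an approximation, not from the talk):

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (deg C) via the Magnus formula."""
    a, b = 17.62, 243.12  # Magnus coefficients for water, roughly -45..60 C
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Humid air (25 C, 65% RH) condenses with modest cooling,
# to a dew point of roughly 18 C...
humid = dew_point_c(25, 65)

# ...but desert air (25 C, 10% RH) must be chilled below freezing
# before any water condenses at all.
desert = dew_point_c(25, 10)
```

This is why a MOF desiccant, which captures water chemically rather than by chilling air to condensation, is such a big deal for arid climates.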

His team ultimately created a device using the MOF as the desiccant. It required no power – just sunlight. The first prototype, built with zirconium, generated about 0.2 L of water per day per kg of MOF. The second prototype (since zirconium is about $150 USD/kg) used aluminium, and produced about 1 L of water per kg per day.

Bottom line

Water Harvesting Inc. is launching passive water extractors for low-humidity air in October; they are looking to commercialise this technology and release very soon. A tremendous success story, and significant for all sorts of reasons; if nothing else because:

  • This rudimentary application of basic nanotechnology has the potential to solve water problems in water stressed regions
  • This technique promises fresh water where potable water is otherwise not present
  • This technique also provides a potential route to carbon dioxide sequestration and mitigation of global warming

More excitingly, the future theoretical direction of this research seems like a natural pathway towards designing customisable and specialised molecular machines for performing very specific tasks. Not programmable / adaptive machines by any means – that would require assembler-level control – but one could certainly imagine programming heterogeneity into a crystalline MOF or COF to perform a specialised operation at one further level of abstraction, so that one would in essence be building reaction vessels at runtime in order to build something more complicated at the molecular level. That would be an assembler, or a pathway to creating assembler prototypes.


General update

July 12, 2019

I’ve been making a bit of progress with my paper on algebraic information theory – and I’ve been continuing to also nurse an interest in building a sandbox RPG type game in Godot.

In regards to the paper, I’ve made a fair bit of progress. Most conceptual challenges have been solved. I’ve also moved towards some better project management practices with said research project, including using source control (finally), splitting my tex file into subfiles and actually having a sensible project structure (a nice-to-have on top of what I’ve done might be to create some form of Makefile to compile the project, although that is made a bit more complicated by the fact that I am using latex-mk and a few other non-standard latex compilation tools). What remains is:

  • polishing arguments
  • omnigraffling diagrams
  • tidying up references with bibdesk
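On the Makefile idea mentioned above, a minimal sketch might look like the following. This assumes the root file is called main.tex and that plain latexmk is acceptable in place of latex-mk – both are assumptions, not the project’s actual setup (and recipe lines must be indented with tabs):

```makefile
# Sketch: build the paper with latexmk, which handles bibtex and reruns.
# "main.tex" is a placeholder for the actual root file.
.PHONY: all clean

all:
	latexmk -pdf main.tex

clean:
	latexmk -C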

With respect to the Godot project, I’ve recently backed a Kickstarter for a multiplayer car game course in Godot that should hopefully be available by Christmas – and certainly, by the end of July I should have access to the current prototype game associated with said course. I’ve also started backing a fellow on Patreon, to help encourage them to continue developing in-game level-editing tutorials for the Godot engine. I’ve already managed to get my hands on some working GitHub code of theirs, though, so I’ve got a decent starting point.

With that knowledge to hand I should be able to start making more appreciable progress at least in prototyping a minimal viable product in Godot to proceed on the basis of.

In regards to same, at the risk of repeating myself, I’ve settled on the following functionality that I’d like to implement for a ‘phase 1 release’:

  • A network scene
  • A play scene (play mode) inheriting from network scene
  • An editor scene (edit mode/overlay) inheriting from the play scene
  • A start-game screen with ‘as server’ and ‘as client (with server IP)’ options
  • Edit overlay is default for client and server
  • Edit can place walls and edit position of a character
  • Can (client or server) use WASD to move character around
  • Wall edits and character moves are synchronised between server and client

Certain parts of the above I can borrow from the aforementioned repository – in particular the edit overlay, editing within the runtime of the game, WASD controls, etc. With respect to the networking functionality, I’m hoping to extract that from the prototype provided with the multiplayer car game course.

Other things I’ve been fooling around with:

  • I’ve looked at Firestore (the rebrand of Datastore) running in (legacy) Datastore mode on Google App Engine via the quickstart, mainly motivated by the fact that my current personal website is essentially just a collection of static files, and I’d like to implement something a bit more modern. I disabled billing on my test app after running it for a couple of days, as the charge was $3.50 USD/day. My opinion is that this is worth exploring a bit further, but only if I can find some way to ship the app that doesn’t use the App Engine flexible environment, which was essentially where all my cost was coming from. Another thing I’d like to look into would be running a Firestore app in native mode. The attractive thing about Firestore is that the pricing (provided one isn’t using another Google service like Flex in conjunction with it) is quite friendly – the free tier in particular is useful for experimentation.
  • I’m also currently investigating setting up my own MediaWiki server. Interesting so far.
  • I’ve been nursing an ambition to create a chat bot app and expose it somewhere on the web; learning a bit about applied machine learning and how to build apps using such techniques seems like a potentially useful thing to learn about doing, particularly if I want to move into the area at some point in the future. Still random thoughts / vapour at the moment though.
  • I managed to get a Siremis / Kamailio VOIP/SIP server prototype more or less up and running, and chatted between my computer and my mobile phone – and got video sort of working, too. The connection was very unstable, though. I’m toying with the idea of tearing down the VM I’m currently using and building a new VOIP server from scratch using Mumble, with Plumble for testing on my Android phone.

Finally, I’ve started a Patreon page of my own here, mainly as an experiment, in order to see if this platform is a viable way to help position myself towards working on things of a more open-ended / research-y nature for a living, rather than merely as a hobby.

Topological Quantum Computing Articles

May 5, 2019

While browsing a recent edition of Nature, I came across several articles on topological quantum computing, or which at least touched on aspects of condensed matter physics relevant to said research direction, plus one bonus article from more recently.


The bottom line of this work is that it appears to me that researchers at Microsoft, and various other institutions, are closing in on building a working topological qubit that would lend itself to scaling up to a practical quantum computer.

What is a topological quantum computer? A good question. This jargon essentially means: ‘use Majorana fermions (particles that are their own antiparticles) to do something weird with anyons in order to achieve highly stable qubits, through topological properties that are hard to deform; then couple these together to make a computer’.

The topological ‘hardness’ of these exotic things that arise within certain condensed matter systems essentially rests on something-something braid groups, per a talk I remember Michael Freedman giving at a conference I attended in Taipa, New Zealand, about 13 years ago (January 2006). That was actually a very interesting conference, with a number of luminaries, including John Conway, who spoke about his Game of Life. Anyway, enough name dropping; my point was that Michael Freedman spoke briefly about braid groups there as part of his work for a thinktank at Microsoft, “Microsoft Station Q”.


What is a non-abelian anyon? A good question. First of all, an abelian anyon is a quasiparticle that obeys exchange statistics intermediate between Fermi-Dirac and Bose-Einstein. Exchanging abelian anyons preserves the state up to a phase.

A non-abelian anyon is a type of quasiparticle that does not necessarily preserve state under exchange. In a landmark 1988 paper (Jürg Fröhlich, “Statistics of fields, the Yang-Baxter equation, and the theory of knots and links”, Nonperturbative Quantum Field Theory, Springer US, 1988, pp. 71-100; paywalled), Fröhlich described the properties of such potential quasiparticles.
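The distinction can be stated compactly: exchanging two identical particles multiplies the wavefunction by a phase, or, in the non-abelian case, acts by a matrix (a textbook summary, not drawn from the papers above):

```latex
% Exchange statistics: bosons (\theta = 0), fermions (\theta = \pi),
% abelian anyons (arbitrary \theta).
\psi(x_2, x_1) = e^{i\theta}\, \psi(x_1, x_2)

% Non-abelian anyons: exchange i acts by a unitary U_i on a degenerate
% state space, and in general the order of braids matters -- this
% non-commutativity is what stores and processes the information.
\Psi \;\mapsto\; U_i\, \Psi, \qquad U_1 U_2 \neq U_2 U_1
```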

In what is in fact a rather excellent introduction to this subject, the usage of non-abelian anyons in this mix is made clearer. Indeed, on page 3:

There are three fundamental steps in performing a topological quantum computation, illustrated in Fig. 1.

1. Creating qubits from non-Abelian anyons.

2. Moving the anyons around—‘braiding’ them—to perform a computation.

3. Measuring the state of the anyons by fusion

Physical realisation of this system is where things are currently in a state of flux. Indeed, certain types of non-abelian anyons can also be what are known as Majorana fermions.

A Majorana fermion is a neutral spin-1/2 particle that can be described by a real wave equation (the Majorana equation, 1937). A property of solutions to this wave equation is that each particle is its own antiparticle.

As to the papers above, it will take some digging to really get to the heart of where things are in this field at the moment, but it looks totally fascinating.

In one paper, the researchers claim that they were able to produce chiral Majorana fermions, a type of non-abelian anyon, in a “hybrid device of [a] quantum anomalous Hall insulator and a conventional superconductor”.

In another, the researchers were able to exhibit “Topological superconductivity in a phase-controlled Josephson junction” (the paper’s title). From the abstract:

Topological superconductors can support localized Majorana states at their boundaries. These quasi-particle excitations obey non-Abelian statistics that can be used to encode and manipulate quantum information in a topologically protected manner.

So they essentially claim that they were able to implement one of the building blocks of the proposed system through control of a type of Josephson junction. Interesting. They go on:

While signatures of Majorana bound states have been observed in one-dimensional systems, there is an ongoing effort to find alternative platforms that do not require fine-tuning of parameters and can be easily scalable to large numbers of states. Here we present a novel experimental approach towards a two-dimensional architecture. Using a Josephson junction made of HgTe quantum well coupled to thin-film aluminum, we are able to tune between a trivial and a topological super-conducting state by controlling the phase difference φ across the junction and applying an in-plane magnetic field.

In a third paper, the researchers seem to have built on the previous result and were able to exhibit “Evidence of topological superconductivity in planar Josephson junctions”. From the abstract (my emphasis):

Majorana zero modes are quasiparticle states localized at the boundaries of topological superconductors that are expected to be ideal building blocks for fault-tolerant quantum computing. Several observations of zero-bias conductance peaks measured in tunneling spectroscopy above a critical magnetic field have been reported as experimental indications of Majorana zero modes in superconductor/semiconductor nanowires. On the other hand, two dimensional systems offer the alternative approach to confine Majorana channels within planar Josephson junctions, in which the phase difference φ between the superconducting leads represents an additional tuning knob predicted to drive the system into the topological phase at lower magnetic fields. Here, we report the observation of phase-dependent zero-bias conductance peaks measured by tunneling spectroscopy at the end of Josephson junctions realized on a InAs/Al heterostructure. Biasing the junction to φ ~ π significantly reduces the critical field at which the zero-bias peak appears, with respect to φ = 0. The phase and magnetic field dependence of the zero-energy states is consistent with a model of Majorana zero modes in finite-size Josephson junctions. Besides providing experimental evidence of phase-tuned topological superconductivity, our devices are compatible with superconducting quantum electrodynamics architectures and scalable to complex geometries needed for topological quantum computing.

So it looks like back in September 2018 folks were closing in on what is required for fault tolerant quantum computing.

Finally, in the April 2019 paper (recent! how intriguing…), the authors describe “Tuning Topological Superconductivity in Phase-Controlled Josephson Junctions with Rashba and Dresselhaus Spin-Orbit Coupling”. From the abstract:

Recently, topological superconductors based on Josephson junctions in two-dimensional electron gases with strong Rashba spin-orbit coupling have been proposed as attractive alternatives to wire-based setups. Here, we elucidate how phase-controlled Josephson junctions based on quantum wells with [001] growth direction and an arbitrary combination of Rashba and Dresselhaus spin-orbit coupling can also host Majorana bound states for a wide range of parameters as long as the magnetic field is oriented appropriately. Hence, Majorana bound states based on Josephson junctions can appear in a wide class of two-dimensional electron gases.

So it seems that an alternative architecture has been proposed. The people involved also do not seem to be Microsoft-affiliated, but a different collaboration. Where was this alternative architecture originally proposed? Unclear; nonetheless, searching for references to “Rashba”, we are led to this part of the paper:

[Regarding implementing architectures to manipulate a 2D electron gas], among these proposals, those based on phase-controlled Josephson junctions with Rashba SOC [see Fig. 1(a)] offer an attractive alternative [37,38,42–47]. Here, the interplay between an in-plane Zeeman field parallel to the superconductor/normal (S/N) interfaces, Rashba SOC, and the Andreev bound states formed in the normal region induces topological superconductivity with Majorana bound states at the ends of the junction.

So it appears there is nothing fundamentally new here – just consolidation in the field as other teams replicate results.

Interesting area to watch, but the key takeaway for me is this:

Besides providing experimental evidence of phase-tuned topological superconductivity, our devices are compatible with superconducting quantum electrodynamics architectures and scalable to complex geometries needed for topological quantum computing.

Together with advances from the new field of twistronics, which may unlock room temperature superconductivity, this is looking tremendously interesting indeed.

A few interesting recent developments

April 20, 2019


I touch briefly on the following topics in turn:

  • [Agriculture] Mechanised Agriculture (and robotics)
  • [Energy] Solar energy from deserts
  • [Agriculture] Cellular agriculture
  • [Computing] Tensorflow probability
  • [Energy] The third wave of renewable deployment
  • [Personal Projects] My current research in mathematics
  • [Computing] Thoughts on Godot
  • [Energy] Fusion power
  • [Rocketry] Reaction Engines Limited
  • [Personal Projects] Things I’d like to learn more about
  • [Personal Projects] Forays into running my own web services
  • [Computing] Topological quantum computing
  • [Water] Status of work on desalination

Mechanised Agriculture

SwarmFarm Robotics (the company I’m following in this space)

The opportunity

  • Automation of mechanised agriculture via artificial intelligence and robotics has the potential to bring a number of benefits, including: massively scaling farmsteads to larger regions, being able to farm previously non-arable land, optimised crop rotation, better soil maintenance and care, micro-control of weeding and spraying, reduced use of pesticides, and micro-management of pruning and harvesting.
  • Productivity gains for farmland are a certainty. Increasing the amount of arable land is also a certainty.
  • Also interesting is whether this sort of farming could be adapted for vertical farming practices as well.

Exporting solar generated energy from large desert regions to regions with less solar production

The opportunity

  • Places like the Sahara and the centre of Australia have low population but have significant amounts of solar energy falling on them.
  • Building solar power plants in these places and then building high voltage power lines would be a way to boost the economy of the region by shipping electricity to places of high population density but with lower amounts of solar energy falling on them.

Australia to Indonesia

Sahara to Europe

Meat grown in the lab

From $250,000 down to $12 for a lab-grown burger in 10 years.

Other implications

  • Decrease of need for land to be used for raising cattle.
  • Reduced methane emissions.
  • Ability to release land for national parks or alternatively use it for other forms of agriculture.
  • Being able to grow other types of meat like fish, which has implications for preservation of fisheries and maintenance of biodiversity in the oceans.

Tensorflow Probability

Why this is cool

  • Probabilistic programming for machine learning.
  • Builds on top of tensorflow.
  • Can run in colaboratory.
  • Supports a wide variety of different ways of testing the efficacy of a machine learning model at runtime.

How this could be used in production systems

  • It could determine whether or not a prediction is reliable, or whether the meta-model in a reinforcement learning problem (e.g. a self-driving car, or a chat bot – cough, Duplex, cough) should stop and seek more information.
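The "stop and seek more information" idea can be sketched without TFP at all: here, the spread of a toy model ensemble serves as the uncertainty estimate, and the system abstains when the spread is too large. The threshold and the ensemble are illustrative assumptions, not TFP API:

```python
import statistics

def predict_with_abstention(models, x, max_std=0.5):
    """Run an ensemble of models; abstain when predictions disagree too much.
    A stand-in for the predictive distributions TFP provides natively."""
    preds = [m(x) for m in models]
    mean = statistics.mean(preds)
    std = statistics.stdev(preds)
    if std > max_std:
        return None, std   # abstain: seek more information
    return mean, std

# Toy "ensemble": three slightly different linear models.
models = [lambda x: 2.0 * x, lambda x: 2.1 * x, lambda x: 1.9 * x]
confident, _ = predict_with_abstention(models, 1.0)    # models agree here
uncertain, _ = predict_with_abstention(models, 100.0)  # spread is huge here
```

TFP replaces the crude ensemble spread with proper posterior predictive distributions, but the control flow in a production system looks the same.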

Implications for testing machine learning models

  • I see the main application in deep reinforcement learning, but there are other opportunities for carefully evaluating the efficacy and/or suitability of different models against a particular pair of training and test datasets.
  • One could potentially start building pipelines to ‘test’ models against particular instances at runtime, particularly when a model might have become stale.

Other implications for data science

  • As mentioned before, I’d be very interested to see how this could be used in reinforcement learning.

The ‘third wave’ of renewables

Future crunch synopsis

“the only limitation is ‘how fast can we deploy?'”

  • At this point, replacement of fossil fuel generation from an economic perspective becomes a bit of a no-brainer.
  • It is estimated that by 2030 there will be nowhere in the world that this no-brainer will not apply.
  • Things may eventually start to plateau once renewables reach 40-60% of base generation due to seasonal or diurnal cycles of power availability. However batteries are improving and becoming cheaper, too, and also there is the potential for shipping power internationally (eg from the Sahara to Europe, and from Australia to Indonesia).

Current research, mathematics

Type theory as a space of functions

  • I have been exploring type theory, which is most naturally thought of as endowing the space of classes of functions that take types and operate on geometry with a geometrical structure of its own. This is potentially a natural way of unifying ideas about topology, geometry, and algebra, and extending them to the foundations of topos theory.

Rather than data as geometry, code as geometry

  • Wheeler talked about ‘It from Bit’, coining a phrase for the interplay between information theory and physics, so that one could think of ‘data’ as being analogous to physical space (subject, of course, to appropriate conventions, definitions, and guardrails against reification fallacies of varifold types and flavours).
  • However the exciting thing is that one could think of the programs that are acting on Bits to be somehow dual to theories acting on physical spaces. So some quite exotic potential directions to explore.

Thoughts on Godot

Status of the Godot project

The sad truth is that, even after two years of being open source… NO ONE KNOWS ABOUT GODOT!
Compared to last year, some more people had heard of Godot, but it’s still very largely unknown. We seriously need to consider ways to improve on this.
I would probably say most of the industry still has not heard about it, but there is a very significant chunk that did, though.
2019 (this GDC)
From everyone we talked to, it seems at this point a majority of the industry has heard of Godot. Most of those we casually talked to definitely had heard of it and many expressed interest in developing games with it in the future

Thoughts regarding cool things that could be introduced

  • My hobbyhorse continues to be a procedurally generated D&D multiplayer sandbox, and steps towards that.
  • For this, being able to use an in-game object picker to drag and drop things would be quite beneficial. I was quite excited by the possibility of voting for a tutorial on this in the recent Patreon vote for this month.
  • This issue, regarding Voronoi tiling and Lloyd’s relaxation, would be useful to explore: . This link in a comment was particularly interesting, as it revealed a wealth of resources across the web on procedural generation:
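As a sketch of what Voronoi tiling plus Lloyd’s relaxation actually involves (my own toy Python version, not code from the linked issue): assign each grid cell to its nearest seed to get a discrete Voronoi tiling, then move each seed to the centroid of its region and repeat, which gradually evens out the tile sizes.

```python
# Toy discrete Voronoi tiling + Lloyd's relaxation, stdlib only.
import random

def nearest_seed(cell, seeds):
    # Index of the seed closest to a grid cell (squared distance).
    return min(range(len(seeds)),
               key=lambda i: (cell[0] - seeds[i][0]) ** 2 +
                             (cell[1] - seeds[i][1]) ** 2)

def lloyd_step(seeds, width, height):
    # Assign every cell to its nearest seed, then move each seed to the
    # centroid of its region -- one iteration of Lloyd's relaxation.
    regions = {i: [] for i in range(len(seeds))}
    for x in range(width):
        for y in range(height):
            regions[nearest_seed((x, y), seeds)].append((x, y))
    new_seeds = []
    for i, cells in regions.items():
        if not cells:                  # keep a seed with an empty region put
            new_seeds.append(seeds[i])
            continue
        cx = sum(c[0] for c in cells) / len(cells)
        cy = sum(c[1] for c in cells) / len(cells)
        new_seeds.append((cx, cy))
    return new_seeds

random.seed(42)
seeds = [(random.uniform(0, 32), random.uniform(0, 32)) for _ in range(8)]
for _ in range(5):                     # a few iterations even out the tiles
    seeds = lloyd_step(seeds, 32, 32)
```

The same idea generalises to GridMaps: the region assignment gives you which tile belongs to which “cell” of the map.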

The need for testing and ensuring that technical debt doesn’t accumulate

  • Technical debt always tends to accumulate, and as Godot continues to grow in popularity past critical mass, there will need to be a continued effort to fight that debt, prioritised as roadmap items for the project.
  • More unit tests, end-to-end tests, and other forms of testing would be a good idea. The ability to run the Godot editor in a browser, a consequence of work funded by the Mozilla foundation, could be a good opportunity for end-to-end or other integration testing.

The need to maintain minimalism in the UI and prevent code and feature bloat while maximising discoverability, documentation and flexibility / performance

  • The reasons for Godot’s emergence as a compelling and competitive challenger to many established game engines are varied. One is that it is open source, but there are other value propositions that will need to be intentionally protected as the project matures if it is, in my view, to remain a compelling engine.
  • Performance, minimalism of UI and design, discoverability, and good documentation are key draws that pull people to the Godot project, and I think that these are things that should be protected.

Fusion Power

z-pinch research group at the University of Washington

Wendelstein 7-x

  • I believe that current experiments will bring the fusion product to within one step of production-ready in 2019, and that Wendelstein 7-X will have reached its optimisation objectives either this year or in 2020.
  • Next steps include the proposed next-generation power plant HELIAS and/or working more closely to feed the Wendelstein findings into the ITER construction.

Iter consortium

  • Inauguration of machine about 2025?

What’s next?

  • look to ITER
  • look to HELIAS and next generation plans at Max Planck
  • z pinch experiments at the University of Washington worth watching
  • probably on track for first commercial fusion power plants conservatively by 2040 or 2045

Reaction Engines Limited

TF1 & TF2 progress

  • TF1 construction and final fit-out almost completed, per my understanding
  • Recent test in TF2 in Colorado passed (Mach 3 air temperatures / Blackbird speed)
  • Recently announced collaboration with the National Composites Centre in the UK

What’s on the horizon?

  • I’d imagine a hotter temperature test in Colorado and/or at TF1
  • Tests at TF1 starting in late 2019 or early 2020
  • In 2020 things will get interesting

Things that I’d like to learn more about


  • Rails Udemy course


Forays into running my own web services

email server

  • interesting to investigate setting up an email server, to reduce total reliance on two or three dominant cloud services
  • advanced considerations include using the SendGrid SMTP relay service
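For illustration, a minimal Python sketch of the relay flow; the host, port, credentials, and addresses below are all placeholders, not real SendGrid settings:

```python
# Sketch of relaying outbound mail through an SMTP relay service.
# Host, credentials, and addresses here are placeholders.
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_relay(msg, host="smtp.example.com", port=587,
                   user="relay_user", password="SECRET"):
    # STARTTLS, then authenticated submission -- the usual relay flow.
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)

msg = build_message("me@example.com", "you@example.com",
                    "Test", "Hello from my own mail server")
# send_via_relay(msg)  # requires real relay credentials
```

The point of the relay is that outbound port 25 is often blocked or poorly trusted from a hobby VM, so submission goes to the relay on 587 instead.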

rss server, with a twitter feed converter as a subsidiary service

  • Good to customise one’s own news feeds, to ensure that one doesn’t drown in irrelevant and/or ‘clickbait’ information
  • Good to have an RSS server, to offload client-side RSS polling to a server in the cloud instead.
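As a sketch of the server-side piece, the following parses an RSS 2.0 feed into (title, link) pairs using only the stdlib. The feed here is a hardcoded sample; a real poller would fetch each feed with urllib on a schedule and diff against what it has already seen.

```python
# Minimal RSS 2.0 parsing for a server-side feed poller (stdlib only).
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def parse_feed(xml_text):
    # Extract (title, link) for every <item> in the channel.
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

items = parse_feed(SAMPLE_FEED)
```

The twitter-to-RSS converter would then just be another producer emitting XML in this same shape.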

voip / sip server & pstn investigations

  • interesting to learn how to set things up with Kamailio
  • newer, more developer-friendly / modern architectures also starting to emerge, like
  • Flowroute or Twilio as PSTN integration services
  • scaling to a business, and how much one could charge, including the potential for profitability
  • backing up data, failover, and ensuring production-readiness for running a VoIP server business

Topological Quantum Computing

Status of work on desalination

The use of large-scale desalination plants is posing an increasing threat to the health of the seas, a recent report from the UN University found. For every litre of freshwater created from a conventional desalination plant, an average of 1.5 litres of brine is also made.

The Guardian

Happy Easter!

Merry Christmas!

December 26, 2018

Various things that I am looking forward to doing as hobby projects next year, or at least chewing over:

  • Investigating building a procedural generation course in Godot on the Udemy platform
  • Writing a paper on pyramid / 2-simplicial categories and submitting it to a mathematics conference (maybe Category Theory 2019?)
  • Writing a paper on algebraic information theory
  • Further afield (2025/2030+ ?), thinking vaguely about lens-categories, and 1-complexity reduction techniques. Such techniques are important if one wishes to construct control circuitry to solve field equations sufficiently quickly for precision control of fairly advanced technology, such as craft capable of practical and routine flight through interstellar space.

Things that I thought were really neat that have happened this year:

  • Discovery of conventional superconductors with Tc conservatively at 250K (however, not at ambient pressure), with potential to find closely related superconductors with Tc up to 320K: . Exotic ‘high-Tc’ superconductivity still remains a mystery.
  • Reaction Engines have completed TF2 in Colorado, USA, and are on track to open TF1 in Westcott, UK, for testing in 2019: . As a reminder, this company is researching technology that may reduce the cost of payload to orbit from ~USD 10000/kg to ~USD 100/kg (although I may be a bit wrong about the precise price points).
  • The Max Planck Institute continued to set new records with their Wendelstein 7-X experiment (which is not designed to be a prototype power plant, but a stepping stone for plasma physics research along those lines). A fusion product was achieved with temperatures up to 20 million Kelvin, and “high plasma densities of up to 2 x 10^20 particles per cubic meter – values that are sufficient for a future power station.” (Temperatures of 100 million Kelvin, as well as continuous operation, are needed for a power plant to achieve ignition of the plasma.) Further upgrades are to be made to the device over the next few years, by installing cooled carbon tiling (I think), potentially allowing the stellarator to achieve continuous operation (and certainly pulses up to 30 minutes); the pulse duration achieved so far has been up to 100 seconds. In short, multiple records have been set by the Wendelstein 7-X team over the last year, and things are looking very promising:
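As a back-of-envelope illustration of the gap those figures imply, the sketch below computes a triple product from the quoted density and temperature and compares it against a rough D-T ignition threshold. The energy confinement time and the threshold value are my own rough assumptions for illustration, not figures from the Wendelstein team.

```python
# Back-of-envelope fusion triple product check. Density and temperature
# are the figures quoted above; tau and the ignition threshold are
# assumed round numbers, for illustration only.
K_PER_KEV = 1.16e7            # ~1 keV expressed in kelvin

n = 2e20                      # particle density, m^-3 (quoted above)
T_keV = 20e6 / K_PER_KEV      # 20 million K in keV -- about 1.7 keV
tau = 0.2                     # assumed energy confinement time, seconds

triple_product = n * T_keV * tau    # keV * s / m^3
ignition_threshold = 3e21           # rough D-T Lawson-style figure
shortfall = ignition_threshold / triple_product
```

Even with these generous assumptions the shortfall is well over an order of magnitude, which is consistent with the point above that ~100 million Kelvin and continuous operation are still needed.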

Various software projects that I am excited about include:

(Actually, I think that is the main software project that excites me at the moment, outside of work related ones)

In terms of yet other projects:

Merry Christmas! Here’s looking forward to many exciting, interesting and useful things happening in 2019!

Various odds and ends

November 11, 2018

I recently discovered that there is an open source game engine called Godot. Having now listened to more than 40 lectures on how to use the engine by, I can confirm that it is quite a useful piece of technology.

Also, I recently came across OpenAI’s ‘Spinning Up’ page from a blog post of theirs, here: , as a way of getting started with the discipline of Deep Reinforcement Learning, a combination of deep learning with reinforcement learning. They use MuJoCo for this, which requires a USD 500 license (so probably not justifiable for random experimentation); however, OpenAI also have an open source variant of the same sort of program here: that, although probably not nearly as good, has the advantage of being open source and free. They are apparently currently working on ‘Roboschool 2’, which will represent a significant step forward on this project.

The FutureCrunch folks have posted three blog entries on the global transition to clean energy, representing four months of intensive research on their part. There are approximately two hours’ worth of reading summed up in the three posts:


FutureCrunch also informed me of the existence of this movie trailer: based on a book of short stories ‘The Wandering Earth’ by Cixin Liu.

Deploying a phoenix app to production, II

June 30, 2018

Following on from my earlier investigations, I found that edeliver failed to provide what I needed to run a phoenix app in production – although I did have a brute-force workflow to deploy something to a DigitalOcean server.

That, too, was limited, however, in that I did not know how to run my app as a daemonised background process.

Fast forward a bit, and I found an alternative deployment tool called bootleg:

I hence followed the following steps:

1. Added these dependencies to my mix.exs
{:distillery, "~> 1.5", runtime: false},
{:bootleg, "~> 0.7", runtime: false},
{:bootleg_phoenix, "~> 0.2", runtime: false}

2. Ran mix deps.get
3. Ran mix release.init
4. Modified .gitignore so that I was able to commit prod.secret.exs to my repo; used environment variables for the actual secrets instead
5. Modified deploy.exs:
role :build, "server_ip", user: 'root', identity: "~/.ssh/id_rsa", workspace: "/app_build"
role :app, "server_ip", user: 'root', identity: "~/.ssh/id_rsa", workspace: "/app_release"

6. Modified deploy/production.exs:
role :app, [""], workspace: "/root/app_release"
7. ssh’d into my server and made sure those directories existed
8. mix bootleg.build
9. mix bootleg.deploy
10. mix bootleg.start

This turned out to be sufficient to deploy my app.

One last thing: I needed to set a blank passphrase for my ssh key. Hopefully bootleg will fix this in a later release.

[Screenshot of the deployed site.]

You can view said site at

Deploying a phoenix app to production

June 25, 2018

I recently followed this tutorial: to deploy a phoenix app to production. Following the steps, I found it helpful in building my confidence in the following:

* scp to transfer files or folders to a server
* ~/.ssh/config to ssh into a server on digitalocean via an alias
* I learned what nginx actually does, namely that it is a reverse proxy, and what a reverse proxy actually is
* I learned about edeliver and roughly how it works in conjunction with distillery

However, I found that I could not progress in the tutorial beyond the point of “mix edeliver upgrade production”; edeliver was failing for some reason.

Eventually I just gave up and followed an alternative process, which I’ve documented in a messy sort of way here (with references): .

Basically, to upgrade a release:

On personal computer:

* on personal computer, make and test a change
* push to github

On github:

* merge to master
* make a new release with tag ‘tag’

On production machine:

* wget the new release on production machine
* tar -xzaf
* cd spinning_cat_’tag’
* Install dependencies with mix deps.get
* Create and migrate your database with mix ecto.create && mix ecto.migrate
* Install Node.js dependencies with cd assets && npm install
* Start Phoenix endpoint with mix phx.server

This flow, although more convoluted and less streamlined, works.

However, in the process of doing this, I discovered potentially why I was stuck in the previous tutorial. Essentially, I was missing a few dependencies in order to run things directly on the production machine:

* curl -sL | sudo -E bash -
* sudo apt-get install -y nodejs
* sudo apt install npm
* sudo apt-get install gcc g++ make
* curl -sL | sudo apt-key add -
* echo "deb stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
* sudo apt-get update && sudo apt-get install yarn
* sudo apt install nodejs-legacy
* sudo apt-get install inotify-tools

However, I have not yet tested this.

So I guess, next steps here:

* See if I can get edeliver working
* Learn properly about nginx reverse proxy
* Learn about dns records

Then, further afield:

* Purchase a domain name
* Customise website