Deploying a Phoenix app to production, II

June 30, 2018

Following on from my earlier investigations, I found that edeliver failed to provide what I needed to run a Phoenix app in production – although I did manage a brute-force workflow for deploying something to a DigitalOcean server.

That approach was limited, however, in that I didn't know how to run my app as a background (daemonised) process.

Fast forward a bit, and I found an alternative deployment tool called bootleg: https://github.com/labzero/bootleg

I then followed these steps:

1. Added these dependencies to my mix.exs
{:distillery, "~> 1.5", runtime: false},
{:bootleg, "~> 0.7", runtime: false},
{:bootleg_phoenix, "~> 0.2", runtime: false}

2. Ran mix deps.get
3. Ran mix release.init
4. Modified .gitignore so that prod.secret.exs could be committed to my repo, with the actual secrets read from environment variables instead
5. Modified deploy.exs:
role :build, "server_ip", user: 'root', identity: "~/.ssh/id_rsa", workspace: "/app_build"
role :app, "server_ip", user: 'root', identity: "~/.ssh/id_rsa", workspace: "/app_release"

6. Modified deploy/production.exs:
role :app, ["178.128.34.45"], workspace: "/root/app_release"
7. ssh’d into my server and made sure those directories existed
8. mix bootleg.build
9. mix bootleg.deploy
10. mix bootleg.start

This turned out to be sufficient to deploy my app.

One last thing: I needed to set a blank passphrase for my SSH key. Hopefully bootleg will fix this in a later release.

[Screenshot of the deployed site]

You can view said site at http://marbles.cthulu.tk:4000/.


Deploying a Phoenix app to production

June 25, 2018

I recently followed this tutorial to deploy a Phoenix app to production: https://www.digitalocean.com/community/tutorials/how-to-automate-elixir-phoenix-deployment-with-distillery-and-edeliver-on-ubuntu-16-04. Following the steps, I found it helpful in that it built my confidence in the following:

* using scp to transfer files or folders to a server
* using ~/.ssh/config to ssh into a DigitalOcean server via an alias
* what nginx actually does – it is a reverse proxy – and what a reverse proxy actually is
* edeliver, and roughly how it works in conjunction with distillery

However, I failed to progress in the tutorial beyond “mix edeliver upgrade production”: edeliver was failing to work for some reason.

Eventually I just gave up and followed an alternative process, which I’ve documented in a messy sort of way (with references) here: https://github.com/token-cjg/spinning_cat/tree/master.

Basically, to upgrade a release:

On personal computer:

* make and test a change
* push to github

On github:

* merge to master
* make a new release with tag ‘tag’

On production machine:

* wget the new release
* tar -xzf the tarball
* cd spinning_cat_’tag’
* Install dependencies with mix deps.get
* Create and migrate your database with mix ecto.create && mix ecto.migrate
* Install Node.js dependencies with cd assets && npm install
* Start Phoenix endpoint with mix phx.server
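
These steps lend themselves to scripting. As a rough sketch, the production-machine half could look something like the following Python, where the tag name and extraction directory are my own hypothetical placeholders, and mix and npm are assumed to already be installed on the server:

#!/usr/bin/env python3
# Sketch of the manual upgrade steps above; the tag name and the
# extraction directory are assumptions, not from the actual repo.
import subprocess

TAG = "v0.1.0"  # hypothetical release tag
TARBALL = f"spinning_cat_{TAG}.tar.gz"
URL = f"https://github.com/token-cjg/spinning_cat/archive/{TAG}.tar.gz"
APP_DIR = f"spinning_cat_{TAG}"  # assumed to match the tarball layout

def run(cmd, cwd=None):
    # Echo and run a shell command, stopping on the first failure.
    print(f"$ {cmd}")
    subprocess.run(cmd, shell=True, check=True, cwd=cwd)

run(f"wget -O {TARBALL} {URL}")   # wget the new release
run(f"tar -xzf {TARBALL}")        # unpack it
run("mix deps.get", cwd=APP_DIR)                         # Elixir dependencies
run("mix ecto.create && mix ecto.migrate", cwd=APP_DIR)  # database setup
run("npm install", cwd=f"{APP_DIR}/assets")              # Node.js dependencies
run("mix phx.server", cwd=APP_DIR)                       # start the endpoint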

This flow, although more convoluted and less streamlined, works.

However, in the process of doing this, I discovered potentially why I was stuck in the previous tutorial. Essentially, I was missing a few dependencies in order to run things directly on the production machine:

* curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
* sudo apt-get install -y nodejs
* sudo apt install npm
* sudo apt-get install gcc g++ make
* curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
* echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
* sudo apt-get update && sudo apt-get install yarn
* sudo apt install nodejs-legacy
* sudo apt-get install inotify-tools

However, I have not yet tested this.

So I guess the next steps here are:

* See if I can get edeliver working
* Learn properly about nginx reverse proxy
* Learn about DNS records

Then, further afield:

* Purchase a domain name
* Customise website

Unity project progressions: 50% completion towards first roadmap item!

June 22, 2018

You may recall (well, likely not) that the last time I wrote in detail about my Unity project was here: https://confusedgremlin.wordpress.com/2016/11/19/further-thoughts-regarding-unity-project/ (wow, 2016! has it really been that long?) (the February post this year wasn’t in detail), and further back I wrote about roadmap items here: https://confusedgremlin.wordpress.com/2015/02/28/dungeons-and-dragons-video-and-alpha-brainstorming/ (in the dark depths of 2015 …).

In the 2015 post I mentioned:

Hence, it now becomes possible to start working towards a first version of my dungeons and dragons dungeon master style multiplayer game.  In particular, I think there are a number of things that I’d now like to do:

  • Plumb in the multiplayer functionality from my previous project.
  • Introduce a simple and straightforward database model for players and dungeon masters.
  • Allow players to spawn in a world in an appropriate way (without falling forever).
  • Allow the dungeon master to move players around.
  • Allow the dungeon master to switch their camera to a creature under their control and move it around.

I realised pretty quickly that this was an ambitious task, and could take years to progress.  Fortunately, due to the relatively recent acquisition of some code masterfully written by another developer, and due to the continued reworking of the Bolt multiplayer framework by Photon, I have finally made a small breakthrough.

You can see it in its full glory here:

[Video: assets being synchronised between clients of a runtime level editor]

Basically, I have succeeded in synchronising certain assets between different clients running an instance of a runtime level editor.  So pretty cool! (at least I think so).  Essentially very little was done here on my part, merely mashing together two codebases (the Bolt network library and some code I purchased from somewhere on the internet) until something fell into place.

I also found the BoltEngine cheatsheet extremely useful in this: https://docs.google.com/document/d/1CvN1E2GOvd_AHnkFOSMSJTR96EBGhPRDPgPMiOuJoSg/view#.

So in terms of the above objectives for the first milestone:

  • Plumb in the multiplayer functionality from my previous project.
  • Introduce a simple and straightforward database model for players and dungeon masters.
  • Allow players to spawn in a world in an appropriate way (without falling forever).
  • Allow the dungeon master to move players around.
  • Allow the dungeon master to switch their camera to a creature under their control and move it around.

I suppose I could say that I’ve mostly met the first and second of these.  There is a little bit more work to do in synchronising textures and working on the client UI, as well as determining how much control a client should have over editing, and whether the host/server should determine privileges, but that is well on its way now – I’d say at least 50%, and maybe 75% done.

Next steps would be to provide the ability for clients to drag and drop avatar tokens (first person controller prefabs), and then provide them the option to select one of their avatars and ‘avatar in’ to first person perspective, then allow them to ‘avatar out’.

After that, more segregation of privilege work, in terms of:

  • first figuring out how to allow client A to move certain entities around, which client B (which might be the same as A) has placed.
  • then figuring out how to isolate this power to a particular client (which would likely be the server) iff A is not equal to B

But as you can see, this work is finally on its way!  Very exciting!  It only took three years =P

Rewilding as the logical outcome of civilisation

March 18, 2018

I believe that a truly advanced civilisation should not be visible within the confines of a biosphere, at least per a modern human’s perception.

Consider human civilisation – from hominids roaming Europe, Asia and Africa hundreds of thousands of years ago, with modern-ish humans emerging about 100 thousand years ago, and modern humans about 20 thousand.  The first small agricultural communities, perhaps 10 to 15 thousand years ago.  The first cities and empires, 5 thousand years ago.  Industrialisation, about 300 years ago.

If risks are managed sensibly, I see that a logical steady state approach would be to seek not to over-extend use of resources, and restore if possible pre-existing ecosystems, or create new ones.

As with the move to cities, our descendants may well move to city memory and only ‘avatar’ into one or multiple cyborg bodies if and when required.  Consequently, although cities may have several orders of magnitude more inhabitants, performing at a cognitive and intellectual level far beyond modern humans, their experiences and chosen interfaces will largely be virtual or simulated.  Moreover, such cities could be much more compact than modern cities, as they would essentially be server farms with robotic maintenance infrastructure.  These might run on nuclear fusion or Casimir pumps.

For reasons of ecotourism and keeping things tidy and ordered, such future civilisations spanning the globe and solar system may decide to let nature reclaim the large tracts of land previously given over to agriculture and more primitive infrastructure, in an ordered but apparently chaotic fashion.  This principle of ecological custodianship would possibly also seek to revive extinct species by first principles, or construct new ones if and as required to fill ecological niches and improve the aesthetic of the apparently wilder world.

Tunnels through space may well be constructed to transport the few goods and resources required for maintenance, obviating the need for traditional shipping (road freight and rail).  Hence these could be left to decay and return to the wilds, or even deliberately dismantled.

Eventually a machine civilisation with citizens in city memory could potentially split off miniature pockets of reality for use, connected by one or multiple small umbilicals to base reality.  It seems likely that by this stage Contact may well have been attained, and this civilisation would no longer be bound to the confines of our Solar System.  On Earth, devices to maintain said umbilicals would not need to be as large as the server farms running the prior cities.  Casimir pumps or more advanced forms of energy production would power the civilisation.  Populations would continue to increase by several orders of magnitude over the previous machine civilisation; however, the maintenance infrastructure and much of the relics of the machine civilisation would no longer be required.  Custodianship processes could accelerate, and managed gardening of the world by a branch of some part of the future government might pick up.  Apparent chaos would continue to increase.  Pocket universes with recreations of old cities might be generated for avatar tourism.

Miniaturisation and abstraction of umbilicals would continue.  Eventually, all interfaces between the base reality and said future civilisation would be essentially invisible to a modern human eye.  The entire planet would contain only the relics of prior settlement that the civilisation wished to leave or preserve.  Any copies for archaeological purposes would have long since been backed up at various levels of abstraction or recreated in pocket universes that might be smaller or larger than the base.  Everything else would be restored and wild.  Atmospheric gas concentrations would be carefully balanced, and ecosystems would be designed to be as stable as possible with negligible intervention.

Consequently, if a modern human were to step into a time machine and emerge on Earth at some sufficiently distant point in the future (700 years?  1500 years?), they could well see no sign of civilisation at all.  However, it would be there, running at a level of sophistication and subtlety well beyond the wildest predictions of such writers as H.G. Wells.

Various Projects, Status & Rundown

February 27, 2018

Here is a brain dump of my various projects, and progress to date on them.

Unity 3D game

  • I was enthused to learn that Unity-Technologies have taken over development of GILES (originally a ProCore3D project) as of 12 days ago: https://github.com/Unity-Technologies/giles .  I have updated my copy of GILES to run against Unity 2017.3.1.
  • I have furthermore been encouraged to learn, per https://forum.photonengine.com/categories/bolt-engine, that BoltEngine is now almost up to speed with the latest Unity, and that a Sample Pack will be released soon (which should make my job of adapting the tech to my purposes simpler).

Phoenix/Elixir app

  • I have started playing around with Phoenix.  My current goal is to build a scaffold for an app where users can submit billboards, with topics, with posts, with attachments.  The data layout for this has largely been done and instantiated via ecto/postgres.  Hopefully I can progress this further by building out the controllers and the views over the next month or two.

Facebook events app

  • I have been spending some time learning a bit about Android – in particular, Facebook app development.  My goal is to build an app that can query the graph api, and extract and present only the data from Facebook that I care about.  This will have a couple of benefits – 1, Facebook will become useful to me again, and 2, it will provide me with practical knowledge in terms of building Facebook apps.
  • Currently I have built a normal Android app, as well as a Facebook app placeholder.  However I am finding that there is a namespace conflict between the two apps; likely I will need to start a new project from scratch, rather than copying and then editing the namespace by hand.  Also, I intend to remove the need to ask for permissions from the app; I dislike apps that ask for permissions they do not need.

Machine learning reading and education

  • I’m currently reading the very approachable Introduction to Statistical Learning (http://www-bcf.usc.edu/~gareth/ISL/), and doing the exercises in R.  My goal is to work my way through this book, and make a start on Elements of Statistical Learning, before looking into potentially undertaking a microdegree or two later this year.

Language Learning

  • I’ve been trying to learn a bit of French by working my way through the Michel Thomas course on same.  I’ve worked my way through the audio course once now; I will aim to work my way through it one or two more times before the end of the year.
  • I’m thinking of learning a bit of German the same way, with the sibling Michel Thomas course.
  • My goal by the end of 2019 is to be able to read Le Monde to a degree (http://www.lemonde.fr/) as well as Der Spiegel (http://www.spiegel.de/)

Research project, arithmetic topology

  • Preliminary progress has been made.  This is largely on the backburner for now, but I should probably make a bit of an effort to properly write up what I have at some point.

Vertical farming: practical with fusion power

September 17, 2017

Vertical farming has the potential to provide a number of advantages over conventional farming:

  • Cut down on supply chain costs
  • More efficient use of land
  • Rewilding of large amounts of wilderness outside of cities
  • Fresher produce more readily available

However, there is a bit of a problem – the energy expenditure required to make it practical renders it more expensive to grow crops in skyscrapers, both economically and environmentally, than conventional farming.  Wheat requires a large amount of energy to grow, as do legumes.

However, if, with any luck, we have the ability to construct workable nuclear fusion power plants by 2030, the nature of the game changes: we would have energy readily available that is much cheaper and cleaner than that from other power sources (with the exception of the Sun, which is also, of course, providing energy emitted by nuclear fusion).

If such does become practicable, large amounts of existing land given over to farming could be returned to the wild and established as national parks and nature reserves.  This would be greatly useful, as it would allow the lungs of the world to regenerate, as well as providing large areas of ecotourism for city dwellers to appreciate on a more primal level.

Management of the cutover from large scale agricultural operations to vertical farming would be likely to take decades, but I could imagine that by 2070 or 2080 it should be possible to return a fair amount of land to the wild.

The tale of Little Red Riding Hood’s Grandmother and the three wolves

August 11, 2017

Once upon a time there was a little old lady who lived by herself in the woods.  ‘I wonder when Little Red Riding Hood is going to call,’ she thought.  There was a rat-a-tat at the door.

‘Let me in, let me in!’ came a gruff voice.

‘Alright, alright,’ said grandma.  ‘Don’t get your socks off’.  She opened the door and there was a wolf.  ‘What can I do for you young man?’

‘I’m here to eat you,’ said the wolf.

‘Yes, you do look ravenous,’ said the little old lady.  ‘You should have some of my cookies first, they are straight from the oven.’

Confused, the wolf grabbed a handful of biscuits and scoffed them down.  ‘Wait a minute, is that chocolate?’

‘It is’, said the little old lady.

*

A little bit later, the second wolf came along to the little old lady’s house.  Rat-a-tat, rat-a-tat, he went at the door.  ‘Coming!’ said a voice.  ‘No patience, young kids these days…’

‘What can I do for you, young man?’ she asked when she opened the door.

‘I’m here to eat you,’ said the second wolf.

‘Oh, that sounds marvellous.  But first you must come inside and get warm.  You’ll catch a chill out there.’

Confused, the second wolf allowed himself to be herded into the house, and into the kitchen.

‘You will be nice and comfortable here in this room.  I think there is a thermostat somewhere,’ the lady said.

Obediently, the second wolf climbed into the tiny room offered.  Then the lady shut the door to the oven and put a pot of tea on the stove.

*

A little bit later, the third wolf came to the door.  Knock-knock, knock-knock.  ‘Yes, yes.  So many visitors.  A lady can’t get any peace around here.’

The door opened, and the third wolf saw a lady warming her hands on a cup of tea.  ‘Yes, so what can I do for you young man?’

Now the third wolf was a bit hard of thinking, and a bit daft.  ‘You know, I really can’t remember,’ he said.

‘Perhaps you would like to fetch this stick for me?’

And they both lived happily ever after.

Consider not what is lost, but what is GAN

July 7, 2017

A generative adversarial network is a pair of neural networks competing against one another in a zero-sum game (todo: consider game-theoretic implications).  Classically one is a generator and one is a discriminator; the generator tries to fool the discriminator.  However, this situation can become unstable, and the algorithm can fail to converge when, say, interpreting or generating digits.
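
To make the game concrete, here is a minimal sketch, assuming PyTorch; the tiny architecture and the 1-D Gaussian target are my own, purely for illustration, not the setup from any particular paper:

import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0  # "real" samples from N(2, 0.5)
noise = lambda n: torch.randn(n, 8)                   # generator input

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator: label real samples 1, generated samples 0.
    x_real, x_fake = real_data(64), G(noise(64)).detach()
    loss_D = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator: try to fool D into labelling generated samples as real.
    loss_G = bce(D(G(noise(64))), torch.ones(64, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

print(G(noise(1000)).mean().item())  # should drift towards 2.0

When training destabilises, that final mean wanders off instead of settling, which is exactly the failure mode mentioned above.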

There is another recent paper, http://journal.frontiersin.org/article/10.3389/fncom.2017.00048/full, on cliques of neurons and their role in consciousness, which draws links between algebraic topology and intelligence.  In particular, it indicates that there are stratified layers of sets of neurons that can be up to 11 strata deep.

So the question I pose is twofold: one, is there a way to avoid instability in a classical GAN and also extend it to broader classes of problems; and two, can we consider the latter paper’s analogue in AI to be stratified layers of sets of neurons, in an underlying ‘space’ of potential usable neurons, that can be up to 11 strata deep, and which can be created, read, updated and destroyed as time buckets continue?  Maybe we could implement this using dind (Docker in Docker).  All sorts of possibilities.

I think this broader idea of ‘GAN’ would be useful, as it would allow us to remove neuron sets that are underperforming or have become unstable, and also allow for new models to arise out of the morass in a Conway’s Game of Life fashion.

But how do we measure, how do we update, what data are we measuring against?  All excellent questions.

Perhaps then we need to consider the set of possible usable neurons as living in a module that has an input (a set of performance measures at time step T) and an output (the same performance measures at time step T+1).  Based on this difference we can apply a feedback loop to the universe of inputs.  Maybe we output the algebraic structure of the set of adversarial subnets in the universal net as a json object, say (with the model parameters included, e.g. matrix representations etc.), so

{json(T)} as input, and {json(T+1)} as output

where some nodes from the json file might be removed, and others added.

eg. { {{ [network including 2 sets of 4 nodes, 2 layers deep], A_ij }, { [network including 3 sets of 3 nodes, 3 layers deep], B_ij } }, { [network with 2 x 4 nodes ], D_ij } }(T)

{ { [ network including 2 sets of 4 nodes, 2 layers deep ], A_ij }, { [network including 3 sets of 2 nodes, 3 layers deep], C_ij } }(T+1).

One could have any sort of network within the universal neural space, eg a CNN, an ordinary hidden markov model, etc.

So we have a big box, with json objects and current training parameters of same being spat in and out of the box.
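
As a very rough sketch of that box in Python (all names, shapes and the fitness rule below are hypothetical placeholders of mine):

import json
import random

# json(T): a population of subnets, each with a shape, parameters and a score.
population = [
    {"id": "A", "layers": [4, 4], "params": [[0.1, 0.2]], "score": 0.9},
    {"id": "B", "layers": [3, 3, 3], "params": [[0.3]], "score": 0.2},
    {"id": "D", "layers": [4, 4], "params": [[0.5]], "score": 0.6},
]

def step(pop, threshold=0.5):
    # One tick of the box: json(T) in, json(T+1) out.  Underperforming
    # subnets are removed; a fresh randomly-shaped subnet is added.
    survivors = [net for net in pop if net["score"] >= threshold]
    depth = random.randint(2, 11)  # up to 11 strata, per the cliques paper
    survivors.append({
        "id": "new%03d" % random.randint(0, 999),
        "layers": [random.randint(2, 5) for _ in range(depth)],
        "params": [],
        "score": 0.5,  # neutral starting fitness
    })
    return survivors

print(json.dumps(step(population), indent=2))  # json(T+1)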

The training parameters of the transient submodels could then be placed inside some form of distance measure relative to the problem we are optimising for (i.e., relative to one’s timeboxed dataset or evolving input).  It strikes me that this sort of approach, although difficult to implement, could potentially lead to applications much more wide-ranging than image manipulation.

A further generalisation would be to allow the model to create an output timeboxed dataset, and allow that to form an extrapolation of previous input or data.  This sort of approach might be useful, for instance, for performing fairly demanding creative work.  One would naturally of course need to seed the task with an appropriate starting point, say a collection of ‘thoughts’ (or rather their network representation) inhabiting a network of cliques in the universal neural lattice.


The current state of open source AI

July 2, 2017

I recently discovered the company OpenAI, which has a significant endowment and a couple of notable projects that enable work on training / testing AI algorithms.  In particular, these projects are Gym (https://github.com/openai/gym) and Universe (https://github.com/openai/universe).  Gym, described in this paper (https://arxiv.org/abs/1606.01540), is essentially a toolkit for benchmarking the performance of various AI algorithms in reinforcement learning.  From the project page:

gym makes no assumptions about the structure of your agent, and is compatible with any numerical computation library, such as TensorFlow or Theano.

Universe fits in with Gym, in that Gym provides the interface to Universe, which provides an AI algorithm with an environment within which to train.  An example of this is given on the project page, which essentially demonstrates use of Universe and Gym to test an agent running the game DuskDrive in a Docker container:

import gym
import universe  # register the universe environments

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # automatically creates a local docker container
observation_n = env.reset()

while True:
  action_n = [[('KeyEvent', 'ArrowUp', True)] for ob in observation_n]  # your agent here
  observation_n, reward_n, done_n, info = env.step(action_n)
  env.render()
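
For comparison, a plain Gym loop (no Universe, no Docker) looks much the same; here is a minimal sketch with a random agent on the classic CartPole-v0 environment:

import gym

env = gym.make('CartPole-v0')
observation = env.reset()

for _ in range(1000):
    env.render()
    action = env.action_space.sample()  # a random agent, in place of a learner
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()  # episode over; start a new one

env.close()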

Evidently Universe is potentially quite powerful.  Indeed, from its main promotional page (https://universe.openai.com/), there are apparently already more than 1000 environments to test / train learning agents against, with many more in the works.

To get started, one could use the OpenAI code in the projects https://github.com/openai/improved-gan or https://github.com/openai/universe-starter-agent; however, those with a more sophisticated bent might instead be interested in investigating DeepMind’s project https://github.com/deepmind/learning-to-learn, which makes use of Google TensorFlow and DeepMind Sonnet (https://github.com/deepmind/sonnet).  Sonnet is a TensorFlow neural network library.

Towards increasingly unorthodox approaches, I think it would be fascinating to watch the space around the development of algorithms that take advantage of the architecture of IBM’s neuromorphic chips (based, I believe, on Field Programmable Gate Arrays), or, looking a bit further out, the opportunities in machine learning associated with Topological Quantum Computers (Majorana fermion based), and possibly the same with a slight qudit boost (a hybrid/pseudo tetracomputer, maybe with switches consisting of up to 100 states).

I will continue to follow developments in this area with some interest.

Cryptography in the Quantum Computing Age

June 16, 2017

There is a problem with much of modern internet communications at the moment.  The problem is that their security relies on factoring being hard.  And this is the case for a conventional computing machine.  However, for a quantum computer, such as may become available to corporations and governments within the next 5 to 10 years, it will be easy.

This necessitates a ‘refresh’ of the algorithms maintaining internet security.  So, how can we do this?

How RSA works

Well, recall the following about how RSA works (from the Wikipedia entry):

  • Take two very large primes (100s of digits long) and multiply them together.   (n = pq, p and q prime).
  • Compute h = lcm(p – 1, q – 1)  (this is easy via the Euclidean algorithm, as lcm(a, b) = ab/gcd(a, b) ).
  • Choose e st e < h and gcd(e, h) = 1 (again, easy via the Euclidean algorithm).
  • Determine d st ed = 1 (mod h)  [ so that d is the modular multiplicative inverse of e ]
  • e is then the public key.  n is also released as public.
  • d is the private key.

Encryption process:

  • Bob obtains Alice’s public key e.  With a message M, he converts it into an integer m.  Then he computes c = m^e (mod n)

Decryption process:

  • Alice obtains Bob’s message c, and decrypts it as c^d = (m^e)^d = m (mod n)
  • Then m -> M is easy if the padding scheme is known.
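
To make the arithmetic concrete, here is a toy run in Python with deliberately tiny primes (the numbers follow the classic worked example from the Wikipedia entry; real keys use primes hundreds of digits long):

from math import gcd

p, q = 61, 53
n = p * q                                   # 3233, released as public
h = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p - 1, q - 1) = 780
e = 17                                      # public key: e < h, gcd(e, h) = 1
d = pow(e, -1, h)                           # 413, the private key (Python 3.8+)

m = 65                    # the message M, already converted to an integer
c = pow(m, e, n)          # Bob encrypts: c = m^e (mod n) = 2790
assert pow(c, d, n) == m  # Alice decrypts: c^d = m (mod n)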

The Achilles heel exposed by a quantum computer

The main point that makes RSA hard to break is that factorisation is hard.  But, if factorisation is easy, Jill can intercept Bob’s ciphertext c and, given her knowledge of n and e (which are public), factor n into p and q, compute h, and then identify d as the multiplicative inverse of e mod h.

Consequent to this, she can then decrypt Bob’s message as above.
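
Continuing with the toy numbers above, Jill's attack is only a few lines once factoring is cheap (trial division here stands in for Shor's algorithm on a quantum computer):

from math import gcd

n, e, c = 3233, 17, 2790  # all public, or intercepted

p = next(k for k in range(2, n) if n % k == 0)  # "easy" factorisation
q = n // p
h = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
d = pow(e, -1, h)    # the recovered private key
print(pow(c, d, n))  # 65, Bob's plaintext m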

So, if factorisation is easy, RSA is in a bit of a pickle.  But is there a better way to encrypt?

RSA over an integer polynomial ring

Process:

  • Take two very large irreducible polynomials over the integer polynomial ring Z[x] (100s of terms long) and multiply them together.   nb. for extra security, you may want to ensure that the coefficients of these polynomials are each at least 100 digits long. (n[x] = p[x]q[x], p[x] and q[x] prime).
  • Compute h[x] = lcm(p[x] – 1, q[x] – 1)  (this is easy via the Euclidean algorithm for integer-valued polynomials, as lcm(a, b) = ab/gcd(a, b)).
  • Choose e[x] st e[x] < h[x] (in degree) and gcd(e[x], h[x]) = 1 (again, easy via the Euclidean algorithm).
  • Determine d[x] st e[x]d[x] = 1 (mod h[x])  [ so that d[x] is the modular multiplicative inverse of e[x] ]
    • Note: This is hard, unless factorisation is easy.
  • e[x] is then the public key.  n[x] is also released as public.
  • d[x] is the private key.
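
To illustrate the key-generation arithmetic, here is a toy example with tiny polynomials of my own choosing (ignoring the size requirements above):

p[x] = x^2 + x + 1,  q[x] = x^2 + 1
n[x] = p[x]q[x] = x^4 + x^3 + 2x^2 + x + 1
h[x] = lcm(p[x] - 1, q[x] - 1) = lcm(x(x + 1), x^2) = x^3 + x^2
e[x] = x^2 + x - 1  (coprime to h[x])
d[x] = -x^2 - x - 1,  since  e[x]d[x] = -x^4 - 2x^3 - x^2 + 1 = 1 (mod h[x])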

Encryption process:

  • Bob obtains Alice’s public key e[x].  With a message M, he converts it into a polynomial m[x].  Then he computes c[x] = m[x]^e[x] (mod n[x])

Decryption process:

  • Alice obtains Bob’s message c[x], and decrypts it as c[x]^d[x] = (m[x]^e[x])^d[x] = m[x] (mod n[x])
  • Then m[x] -> M is easy if the padding scheme is known.

Is this vulnerable to attack by a Quantum Computer eavesdropping on a channel carrying an encrypted message?

I claim not, as the set of irreducible (prime) polynomials over the integer polynomial ring is much bigger than the set of primes.  The cardinality might be the same, but the complexity just seems intuitively much greater to me.  Factorising a semi-prime polynomial, whose factors are prime polynomials with possibly 100 or 200 terms, each with coefficients up to 150 digits in length, is, I suspect, much, much more difficult than factorising a semi-prime integer.

If one had a tetra-computer (“hyper-computer”), the story might be different, but I suspect we will not have sufficiently powerful variants of those until 2050.