Consider not what is lost, but what is GAN

A generative adversarial network (GAN) is a pair of neural networks competing against one another in a zero-sum game (todo: consider the game-theoretic implications).  Classically one is a generator and one is a discriminator: the generator tries to fool the discriminator into accepting its samples as real.  However, this setup can become unstable, and the algorithm can fail to converge when, say, learning to interpret or generate digits.
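To fix ideas, here is a minimal sketch of the classical setup in PyTorch (the tiny architectures, the toy data centred at (2, 2) and all the names are purely illustrative choices of mine): the discriminator is trained to tell real samples from generated ones, and the generator is trained to fool it.  On real problems, this alternating loop is exactly where the instability creeps in.

```python
# Minimal GAN sketch (illustrative only, not a recipe from any particular paper).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # stand-in for real data: points scattered around (2, 2)
    return torch.randn(n, data_dim) + 2.0

for step in range(1000):
    # Discriminator step: push real samples towards label 1, fakes towards 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on fresh fakes.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```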

There is another recent paper, http://journal.frontiersin.org/article/10.3389/fncom.2017.00048/full, on cliques of neurons and their role in consciousness, which draws links between algebraic topology and intelligence.  In particular, it indicates that there are stratified layers of sets of neurons that can be up to 11 strata deep.

So the question I pose is twofold: one, is there a way to avoid instability in a classical GAN and also to extend it to broader classes of problems; and two, can we take the latter paper’s analogue in AI to be stratified layers of sets of neurons, living in an underlying ‘space’ of potentially usable neurons, up to 11 strata deep, which can be created, read, updated and destroyed as time buckets pass?  Maybe we could implement this using dind (Docker in Docker).  All sorts of possibilities.

I think this broader idea of a ‘GAN’ would be useful, as it would allow us to remove neuron sets that are underperforming or have become unstable, and also allow new models to arise out of the morass in a Conway’s Game of Life fashion.
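To make the bookkeeping concrete, here is a very rough, purely hypothetical sketch in Python of the ‘universe’ of neuron sets: a registry of subnets organised into strata (capped at 11, following the paper above), with create/destroy operations and a per-time-bucket step that culls underperformers and spawns fresh candidates, Game of Life style.  How the score is actually measured is deliberately left open, and the names and thresholds are invented for illustration.

```python
# Hypothetical registry of transient subnets in the universal neural space.
import random
from dataclasses import dataclass, field

MAX_STRATA = 11  # cap suggested by the cliques-of-neurons paper above

@dataclass
class SubNet:
    name: str
    stratum: int                                   # which stratum it lives in (0..10)
    params: dict = field(default_factory=dict)     # e.g. weight matrices A_ij
    score: float = 0.0                             # rolling performance measure (definition left open)

class NeuralUniverse:
    """Registry of subnets, updated once per time bucket."""

    def __init__(self):
        self.subnets = []  # list of SubNet

    # Create / destroy operations on the population.
    def create(self, net):
        assert 0 <= net.stratum < MAX_STRATA
        self.subnets.append(net)

    def destroy(self, name):
        self.subnets = [n for n in self.subnets if n.name != name]

    def step(self, cull_below=0.3, spawn=1):
        """One time bucket: cull underperformers, spawn fresh candidates."""
        survivors = [n for n in self.subnets if n.score >= cull_below]
        for _ in range(spawn):
            survivors.append(SubNet(name=f"spawn-{random.randint(0, 1_000_000)}",
                                    stratum=random.randrange(MAX_STRATA)))
        self.subnets = survivors
```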

But how do we measure, how do we update, what data are we measuring against?  All excellent questions.

Perhaps, then, we need to consider the set of potentially usable neurons as living in a module that has an input (a set of performance measures at time step T) and an output (the same performance measures at time step T+1).  Based on this difference we can apply a feedback loop to the universe of inputs.  Maybe we output the algebraic structure of the set of adversarial subnets in the universal net as a JSON object, say (with the model parameters included, e.g. matrix representations), so we have

{json(T)} as input, and {json(T+1)} as output

where some nodes from the JSON object might be removed, and others added.

e.g. { { { [network including 2 sets of 4 nodes, 2 layers deep], A_ij }, { [network including 3 sets of 3 nodes, 3 layers deep], B_ij } }, { [network including 2 x 4 nodes], D_ij } }(T)

{ { [network including 2 sets of 4 nodes, 2 layers deep], A_ij }, { [network including 3 sets of 2 nodes, 3 layers deep], C_ij } }(T+1).
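Purely for illustration, the two snapshots above might serialise as something like the following (the field names are mine, and the parameter matrices are stood in for by their labels): between T and T+1 the B_ij and D_ij entries disappear, a C_ij entry appears, and A_ij survives.

```python
import json

# Hypothetical serialisation of the universe at consecutive time buckets.
state_T = {
    "t": 0,
    "subnets": [
        {"topology": "2 sets of 4 nodes, 2 layers deep", "params": "A_ij"},
        {"topology": "3 sets of 3 nodes, 3 layers deep", "params": "B_ij"},
        {"topology": "2 x 4 nodes",                      "params": "D_ij"},
    ],
}

state_T_plus_1 = {
    "t": 1,
    "subnets": [
        {"topology": "2 sets of 4 nodes, 2 layers deep", "params": "A_ij"},
        {"topology": "3 sets of 2 nodes, 3 layers deep", "params": "C_ij"},
    ],
}

print(json.dumps(state_T_plus_1, indent=2))  # serialises cleanly as JSON
```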

One could have any sort of network within the universal neural space, e.g. a CNN, an ordinary hidden Markov model, etc.

So we have a big box, with JSON objects (and the current training parameters of the submodels they describe) being spat into and out of the box.

The training parameters of the transient submodels could then be placed inside some form of distance measure relative to the problem we are optimising for (i.e., relative to one’s timeboxed dataset or evolving input).  It strikes me that this sort of approach, although difficult to implement, could potentially lead to applications far more wide-ranging than image manipulation.
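As a sketch of what such a distance measure might look like (everything here is an assumption of mine, not a settled design): score each transient submodel against the timeboxed dataset with an ordinary loss, and feed the negative loss, plus a bonus for improvement over the previous bucket, back to the universe as that submodel’s fitness.

```python
import numpy as np

def fitness(predict, dataset, prev_loss=None):
    """Distance of a submodel from the problem we are optimising for.

    `predict` is whatever callable the subnet exposes; `dataset` is the
    timeboxed (X, y) pair for the current bucket.  Returns a score where
    higher is better, optionally rewarding improvement over the last bucket.
    """
    X, y = dataset
    loss = float(np.mean((predict(X) - y) ** 2))  # plain squared error as the distance
    score = -loss
    if prev_loss is not None:
        score += prev_loss - loss                  # bonus for improving between buckets
    return score, loss
```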

A further generalisation would be to allow the model to create an output timeboxed dataset, and allow that to form an extrapolation of the previous input or data.  This sort of approach might be useful, for instance, for performing fairly demanding creative work.  One would of course need to seed the task with an appropriate starting point, say a collection of ‘thoughts’ (or rather their network representations) inhabiting a network of cliques in the universal neural lattice.

 
