
Some thoughts regarding M-theory

June 14, 2016

One of the things I have been thinking about recently is the nature of what the theory previously known as string theory is actually supposed to be about.  There is some murky intuition floating about – that Feynman diagrams should somehow be lifted to computations over Teichmüller spaces, the amplituhedron, and suchlike.  But it has occurred to me that this is rather a straw man for what is really at the heart of the matter, or the structure that people are concerned with here.  For there seems to be a category error at work, so to speak, or a particular cognitive fallacy: the pre-school analogy for the structure people are seeking to study is erroneously taken to be one and the same as the true underlying structure.

This is not obvious, but it has become slightly clearer to me over the years, now that I have had time to properly study exotic geometries and their associated geometric invariants, information theory, and higher category theory.  Unfortunately, the discourse around this has become a rather peculiar game of information and disinformation – it is, in fact, hard to tell where the pieces sit.  But, as always, it is helpful to be guided by good mathematical and physical intuition.  Truth is truth.

So, what is string theory / M theory – what is it actually supposed to be about?  Well, one simple statement I could make is that it is an extension of L theory (concerning which there are many excellent results due to Ranicki), which is in turn an extension of K theory, to which many deeply talented twentieth-century mathematicians contributed, such as Atiyah, Grothendieck, and the Bourbaki group.

But that is just replacing one term with another.  So, what is K theory?  It is the cohomology of schemes.  And what is cohomology?  It tells you, roughly, how to classify differential forms within a local chart on a manifold.  A scheme, on my admittedly simplistic viewpoint, can be viewed as an object which, within a local chart, may be associated to the twisting of two algebraic varieties (which can themselves be viewed as charts of an Alexandrov space).  So, if {(x, y, z) | f(x, y, z) = 0} is one variety, and {(x, y, z) | g(x, y, z) = 0} is the other, then a scheme might be given by {(x, y, z) | \circ(f; g)(x, y, z) = 0}, where \circ(f; g) is the composition of f with itself g times.  And there is more than one way to construct an exotic twisting of f by g – there are, in fact, three: we can also act on f by g with repeated multiplication, or with repeated exponentiation.
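As a toy illustration of these three twistings, here is a minimal sketch in Python, under the simplifying assumption (mine, for illustration – the post leaves this open) that g evaluates to a small positive integer serving as the iteration count at each point:

```python
def twist_compose(f, g):
    r"""\circ(f; g): compose f with itself g(x) times at the point x."""
    def h(x):
        y = x
        for _ in range(int(g(x))):
            y = f(y)
        return y
    return h

def twist_multiply(f, g):
    """The multiplicative twisting: the product of g(x) copies of f(x)."""
    def h(x):
        result = 1.0
        for _ in range(int(g(x))):
            result *= f(x)
        return result
    return h

def twist_exponentiate(f, g):
    """The exponential twisting: a power tower of f(x) of height g(x)."""
    def h(x):
        result = f(x)
        for _ in range(int(g(x)) - 1):
            result = f(x) ** result
        return result
    return h

f = lambda x: x + 1   # illustrative choices of f and g
g = lambda x: 3
print(twist_compose(f, g)(2.0))       # f(f(f(2))) = 5.0
print(twist_multiply(f, g)(2.0))      # 3 * 3 * 3 = 27.0
print(twist_exponentiate(f, g)(2.0))  # 3 ** (3 ** 3) = 3 ** 27
```

A 'twisted variety' in the sense above would then be the zero set of one of these h.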

So that is K theory, and that is schemes, or 1-schemes.  So, what about L theory?  Well, L theory concerns itself with 2-schemes, which can be constructed by twisting three 1-schemes together in one of five different ways – with the operators \circ, \circ_(2) (higher order, or tetrated, composition), \star, \wedge, and \wedge_(2) (higher order, or tetrated, exponentiation – that is, pentation).
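To make names like 'tetration' and 'pentation' concrete, here is the standard hyperoperation ladder in Python (a sketch of the usual ladder; how precisely it lines up with the operators \circ_(2) and \wedge_(2) above is a matter of the post's own conventions):

```python
def hyper(level, a, b):
    """Standard hyperoperation ladder for small positive integers:
    level 1 = addition, 2 = multiplication (repeated addition),
    3 = exponentiation, 4 = tetration, 5 = pentation."""
    if level == 1:
        return a + b
    result = a
    for _ in range(b - 1):
        result = hyper(level - 1, a, result)
    return result

print(hyper(3, 2, 4))  # 2 ** 4         = 16
print(hyper(4, 2, 3))  # 2 ** (2 ** 2)  = 16     (tetration)
print(hyper(5, 2, 3))  # 2 tetrated by 4 = 65536 (pentation)
```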

Hence we have 2-schemes, and L theory.  There is a deep, deep well of structure waiting to be explored in that area alone.  But what about M theory?  Well, M theory concerns itself with 3-schemes, which are constructed by twisting five 2-schemes together in one of seven different ways – with the operators above, together with \circ_(3) (pentated composition) and \wedge_(3) (pentated exponentiation, or heptation).

However, this is a slight simplification, because we also have exotic operators that act on 3-tuples of functions (rather than the 2-tuples above), as well as on 5-tuples of functions (these operators are related to precursor geometries for cybernetic and meta-cybernetic structures, respectively).

Evidently the theory here is fantastically, wildly, stupidly deep.  The structure is gloriously fascinating and elegant in its form.  And, with this viewpoint, one is now prepared to read the papers of the leading lights of the field with a new perspective.

We can also ask the question – why jump from K theory straight to M theory?  What is the reasoning for that?

I can hypothesise an answer – it is, perhaps, because M theory is a 3-categorical construction: it makes most sense in the language of 3-categories.  Just as K theory deals with 1-categories, with function spaces as the atomic objects, and L theory deals with 2-categories, with function-function spaces as atomic (ie, functionals forming a basis for the space), M theory deals with function-function-function spaces, with meta-functionals as a basis for the space.  Hence, the geometric invariants of M theory are meta-meta-functionals.

Now, for reasons that I think it might be possible to bludgeon into logic, one might, by a peculiar association, conflate the four levels of subtlety traversed so far – ordinary set theory, or 0-category theory, through to 3-category theory – with the dimensions of ordinary space.  On this view, the study of M theory is motivated by the idea that the structure or scaffolding of the abstraction itself manifests the underlying structure of reality – the meta-information associated to the theory.  Which is really, really weird if you think about it.  But maybe this is the case.

Another reason the jump from K theory to M theory might have been made is some strange, quixotic quest to discover a ‘theory of everything’.  But the problem is that this is only the fourth level of subtlety – there are a fair few more steps to traverse before one gets to aleph null.  Another four more, at the very least!

So that is my current thinking regarding this particular circus.  L theory and M theory, with their associated higher categorical abstractions, are areas of mathematics that are nowhere near as deeply plumbed as K theory, and which have at least as much structure, if not more.  Since K theory occupied a generation of the finest minds on the planet for a number of decades, I can see that these other areas could definitely warrant a certain degree of attention and respect.

It is perhaps unfortunate that such noble structures as these have been given short shrift in the way their applications to physics have been portrayed.  There are probably two reasons that things have become slightly strange here.  One is that the public at large – and, in particular, the educated public – are a touch or two smarter and better informed about physics than most full-time intellectuals might be prepared to credit.  However, they are perhaps not sufficiently interested, by and large, to reason their way through the stick-figure interpretation of the phenomenology associated to higher categorical descriptions of reality, and to see that there might be more to it than ‘vibrating modes of a string’.  The fact is, to my mind, there are no strings – the strings are perhaps an artifact of conflating the arrows in a commutative diagram in higher category theory with actual physical objects, combined with some confusion about tubes in Feynman diagrams.

The other reason is that the majority of academics who concern themselves with this sort of thing are perhaps not as well informed as they ought to be.

Indeed, it is a pity, because the discipline suffers for the lack of an adequate popular-science explanation of many of these things, as well as of a faithful portrayal of the associated objects.

A further confusion is whether the structures themselves should dictate how to define invariants for them – or whether the way that invariants, or physical principles, are developed should itself be considered part of the theory.  The received wisdom here seems to lack a clear narrative as to its positioning, which can lead to confusion.  For instance, there is a fair bit of promising work in other areas, such as the information sciences and machine learning, that could inform the discussion of how to derive geometric invariants for a given structure from first principles.

Some thoughts on developments in enabling technologies over the next fifty years

May 7, 2015

Hi folks,

I thought I might share a few thoughts with you today on something that I’ve been mulling over for a while.  It is a fairly standard problem in computational mathematics, and it arises naturally when dealing with solutions to partial differential equations whose geometric invariants involve tensors of relatively high order (4+; eg, elasticity tensors in materials science): namely, the matter of solving these equations numerically – via finite element methods – over a 2- to 4-dimensional domain.

It is simple enough to solve Laplace’s equation or the heat equation over a square domain, but the problem arises when one increases the number of dimensions – or when one introduces a non-trivial (read: non-flat) metric tensor, such as when exploring numerical solutions / computational simulations of the equations of general relativity.  The former problem is easily solved on a desktop computer; the latter requires a relatively powerful supercomputer.
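To make the easy case concrete, here is a minimal sketch of Jacobi iteration for Laplace’s equation on a square grid (grid size, boundary data, and tolerance are all illustrative choices of mine):

```python
import numpy as np

n = 100
u = np.zeros((n, n))
u[0, :] = 1.0  # top edge held at 1, the rest of the boundary at 0

for sweep in range(10_000):
    new = u.copy()
    # each interior point becomes the average of its four neighbours
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
    delta = np.max(np.abs(new - u))
    u = new
    if delta < 1e-6:
        break

print(f"converged after {sweep + 1} sweeps")
```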

Why the difference?  The key problem is that, in the classical approach to solving a system of equations numerically, the time to converge to a solution (if a solution exists – but that is another matter entirely, involving the growth and control of singularities) grows exponentially with the number of dimensions.  For instance, for a square domain 100 finite elements on a side, one needs to perform operations on 100 x 100 units perhaps 50 times until convergence.  For a cubic domain, it is 100 x 100 x 100 units, again around 50 times.  For a four-dimensional domain, it is 100^4 units, 50 times over.  So the issue is clear – it is devilishly hard to brute-force problems of this nature.
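The back-of-envelope count from the paragraph above, in code (the 50-sweep figure is the post’s own rough estimate):

```python
# (elements per side)^dimension grid points, swept ~50 times each
side, sweeps = 100, 50
for d in (2, 3, 4):
    print(f"{d}D: {side**d * sweeps:,} point-updates")
# 2D: 500,000   3D: 50,000,000   4D: 5,000,000,000
```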

For this reason, contemporary approaches tend to use sparse matrix methods (for ‘almost flat’ tensors) or to impose a high degree of symmetry on the problem domain under exploration.  But this doesn’t really solve the underlying problem.  Surely one must be able to find a better way?
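For a feel for the sparse route, here is a sketch (my illustration, using scipy’s standard sparse tools and the usual five-point Laplacian; the sizes and right-hand side are arbitrary):

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

n = 100
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))  # 1D second difference
I = identity(n)
A = (kron(I, T) + kron(T, I)).tocsr()  # five-point 2D Laplacian, n^2 x n^2
b = np.ones(n * n)                     # illustrative source term

# the matrix is never stored densely; only its non-zero diagonals are kept
u = spsolve(A, b)
```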

Well, maybe with quantum computers one might be able to.  A recent paper (2013), as well as a slightly older document (from 2010), suggests that one can solve linear systems of equations exponentially faster using a computer of such a nature.  If true, this is encouraging, as it would render computational problems like this rather more tractable.  The developments at Microsoft Station Q on topological quantum computers, and the progress on more conventional quantum computers by researchers at other labs, such as IBM, who recently made a breakthrough in error correction (extending possible systems of qubits to the 13 to 17 qubit range), suggest that the era of quantum computing is not too far away – we may be less than 20 years from desktop devices.

In a moderately more speculative vein, I am intrigued by the more futuristic possibility of going beyond quantum computing, into the domain of what one might call hyper-computing.  This would deal with systems of ‘hyper-qubits’ that are ‘hyper-entangled’.  That is probably a bit vague, but what I have in mind is atomic components of the system that have multiple quantum numbers, each of which is entangled with the other atoms of the system, as well as entangled internally.  So, essentially, one would have two strata, or levels, of entanglement.  The key idea is that it might be possible to scale computing power not linearly, as with bits, or exponentially, as with qubits, but as 2 tetrated by the number of hyper-qubits hyper-entangled.  That would be a stupendous achievement – and, yes, futuristic.  At the very least, it would make problems such as the one described above much, much easier to solve, if not outright trivial.

For a slightly better idea of how this might work: the general idea would be to entangle a system (such as a pair of photons) in N degrees of freedom, as opposed to merely one for a standard qubit, and then to entangle this system with M copies of the same.  So there would be two strata to the machine; ie, it would be an (N, M) hyper-entangled system.  If one could then scale N and M sufficiently – potentially by increasing the complexity of the system’s constituent ‘atomic systems’ – I suspect one would essentially have a system whose power grew as some number tetrated by some other number, the latter growing as a monotonic increasing function of N and M.
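To give a feel for the growth rates being compared – linear for bits, exponential for qubits, and the entirely hypothetical tetrational scaling above – a toy calculation:

```python
def tetrate(a, b):
    """a tetrated by b: the power tower a ** (a ** (... ** a)) of height b."""
    result = 1
    for _ in range(b):
        result = a ** result
    return result

# linear (bits) vs exponential (qubits) vs hypothesised tetration
for k in range(1, 5):
    print(k, 2 * k, 2 ** k, tetrate(2, k))
# at k = 4 tetration already reaches 65536; at k = 5 it has ~20,000 digits
```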