“Space is big. You just won’t believe how vastly, hugely, mindbogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.” – Douglas Adams, The Hitchhiker’s Guide to the Galaxy
In the previous two parts I discussed two ideas:
That the universe is finite.
That any computable mathematical structure exists in the same sense as our universe exists.
I would now like to suggest the general form of structure our universe takes: that of a network or graph. I’ll be considering a network to be a finite number of points, some of which are connected and some are not. That’s it. For example:
Clunky graphics I know, but I’m working on it.
This is a much simpler model than our universe would presumably need. For example: points could come in different types, which we could label red, blue or green, say; connections could similarly come in different types; triplets of points could be joined by triangles. Let’s just suppose that our universe is a network with various such properties, one of which is that some but not all of the points are connected, and that we’re only going to follow those connection patterns. In other words, limiting this discussion to the simple model does not mean the other modelling complexities don’t exist.
In the diagram there is a lot of empty space. The spaces outside of the lines and points are not part of the network. Within the universe of the network there are only lines and points. The points have no size, the lines have no width, and their apparent length has no relevance. In fact the lines don’t even exist (they couldn’t, because they’d be continuous); they just visually represent the fact that two points have the property of being ‘connected’.
Time is introduced by way of an algorithm that changes the pattern of connections in discrete steps. In this way the universe can evolve and patterns can emerge.
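As a concrete illustration, here is a toy update rule in Python (entirely my own invention, not a proposal for the real algorithm): at each discrete tick, any two points that share a common neighbour become connected.

```python
def step(connections):
    """One discrete tick of a toy update rule (purely illustrative):
    connect any two points that share a common neighbour."""
    # Build each point's set of neighbours from the connection pairs.
    nbrs = {}
    for a, b in connections:
        nbrs.setdefault(a, set()).add(b)
        nbrs.setdefault(b, set()).add(a)
    new = set(connections)
    # Every pair of points around a shared neighbour gets joined.
    for shared in nbrs.values():
        pts = sorted(shared)
        for i, a in enumerate(pts):
            for b in pts[i + 1:]:
                new.add((min(a, b), max(a, b)))
    return new

universe = {(0, 1), (1, 2), (2, 3)}   # a chain of four points
universe = step(universe)
print(sorted(universe))  # [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
```

Any such rule is deterministic: given the same starting pattern it always produces the same future, which is exactly the sense in which these universes evolve algorithmically.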
Distance in this universe is defined by the connections. Each connection is taken to be of the same ‘length’. From this comes a notion of distance between points: the minimum number of joins you have to go through to get from one point to another.
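This notion of distance is just shortest-path length counted in joins, which a breadth-first search computes directly. A minimal sketch (the point labels and example edges are arbitrary):

```python
from collections import deque

def distance(connections, start, goal):
    """Minimum number of joins to get from start to goal.
    Every join counts as length 1; returns None if no path exists."""
    neighbours = {}
    for a, b in connections:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    frontier = deque([(start, 0)])   # breadth-first search
    seen = {start}
    while frontier:
        point, d = frontier.popleft()
        if point == goal:
            return d
        for nxt in neighbours.get(point, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

# A chain of four points: 0-1, 1-2, 2-3
edges = [(0, 1), (1, 2), (2, 3)]
print(distance(edges, 0, 3))  # 3: three joins to traverse
```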
Space and matter within the universe are both part of the network. Space can be conceptualised as elements of the network that align themselves into a somewhat chaotic array that at a global level form a somewhat regular arrangement. For example they may tend to form a somewhat regular three-dimensional grid. Particles can be thought of as a stable geometric structure of points and joins that are not part of the background arrangement that defines space.
If our universe is fundamentally based on a structure that includes such a network we can start to make deductions about how our universe should look.
I’ll be using the Weak Anthropic Principle (WAP), which states that the physical properties of our universe are not an even sample of all the possibilities: they are restricted to properties under which life such as ours can exist. They are also more likely to be properties under which life is abundant than properties under which it is scarce. (Forget the Strong Anthropic Principle; it’s off with the pixies.)
Let me give you a simple example. Suppose precisely two universes exist. One of them is more conducive to life and evolves 100 independent civilisations that develop a philosophy of existence. The other evolves only one. If this were the case then we should presume that we exist in the universe holding 100 civilisations, as this is far more likely than our happening to be the single civilisation of the second universe. So this gives us a (very rubbery) prediction: we are ‘more likely’ to be living in a universe with other civilisations. Nevertheless we may well be the only civilisation in our universe; in fact recent results point a little in this direction.
Let’s instead restrict ourselves to universes that follow our ‘time’ algorithm, bearing in mind that we are assuming every possible mathematical structure exists as a universe. How do we enumerate them? The feature that distinguishes them is the starting configuration. These can consist of very few points or many, with all the different patterns of connection. Each universe evolves algorithmically from its initial state (that is, without randomness; I’ll discuss quantum mechanics in a minute) to its own set of planets, stars, galaxies and so on. They will all have the same neutrons and protons and so on (or maybe not, but that doesn’t matter). Which ones are we most likely to be in? By the argument above we can say: the bigger ones!
But now we have a problem – how big? Since we presume every finite universe of our algorithm exists, then there really is no limit to how big it might be. Also, there are vastly more big initial states than small initial states – here I’m defining big to mean number of points in the network. So the first prediction is that our universe should be really big. No, bigger. In fact it should be ridiculously obscenely wastefully living-like-there’s-no-tomorrow big.
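To see just how lopsided the count is, note that a network on n labelled points has n(n-1)/2 possible connections, each either present or absent, so the number of initial states doubles with every extra possible connection. A quick sketch:

```python
def initial_states(n):
    """Number of possible connection patterns on n labelled points:
    each of the n*(n-1)/2 possible joins is present or absent."""
    return 2 ** (n * (n - 1) // 2)

for n in (2, 5, 10, 20):
    print(n, initial_states(n))
# The count explodes with n, so almost all initial states are big ones.
```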
First prediction: The universe is big – check.
And that’s just our observable universe, what about what’s beyond?
Second prediction: The universe beyond our observable universe will make the observable universe look like a gnat.
(The current theory of cosmic inflation suggests the relative size of unobservable to observable is 10^23.)
But there’s a problem here. However big the universe turns out to be, nicely backing up this philosophy, we’re left wondering why it’s actually so small. No matter how big ours is, there are vastly more universes larger than it than there are smaller ones. In fact we might like to make statements such as: the universe should be larger than ours with probability one.
And yet it must also be finite with probability one. So what’s going on here? When we talk about which universe we happen to have evolved in, we’re implicitly imagining some form of selection process: of all the civilisations in all of the universes, we are one random one. That’s fine for finite collections but, as I’ve tried to convince you, the set of all possible initial states is infinite, and so that set doesn’t exist. And you can’t pick from a list that doesn’t exist. We can only pick from finite collections of universes.
So now we have our large universe. Let’s now restrict ourselves to all of the universes that start with our initial state and follow our same algorithm, but perhaps with slight once-only anomalies in that algorithm. Consider a universe identical to ours, except that at one point a long while ago the algorithm made a single special-case deviation and defined a teapot to suddenly pop into existence in orbit around the Sun. After this occurred, the two universes’ futures slowly diverged, and by the butterfly effect different civilisations emerged on Earth. Which one would we be in? I’d say they are equally likely. Let’s take it further: there are vastly more universes where a single teapot appears somewhere than the one universe where it doesn’t. So why don’t we see any teapots? That’s a profound and difficult question, but I’m going to try to answer it.
Suppose instead of a teapot appearing, we consider all possible randomly shaped lumps of stuff that could pop into existence. These have many more configurations than teapots and hence define more universes, so let’s instead consider universes with random lumps. There will be a huge number of distinct random lumps if you distinguish differences at the microscopic level. Still more if you go to the particle level. Best of the lot is to go all the way down to the Planck length, where the lump of stuff is actually a set of random alterations to individual joins in the network. This is the type of universe we should be in: one where a lump of stuff, random at the Planck length, appears at one point in time.
But we don’t need to stop there. We can generate many more possible universes by having the random lumps appear everywhere, all through time. That’s what our universe should be like.
Taking things to this limit creates a couple of issues. Firstly, there must be restrictions on the algorithms that define all the universe-branching. Completely random changes would leave a universe completely random, and no stable structures could emerge. This is not the case in our universe. For us, the wave propagation of a particle combined with its ‘random’ wave function collapse is a restricted randomness that allows stable, complex, organised structures to develop.
Secondly, as we introduce all of these random lump exceptions that apply to our universe, the algorithm becomes longer and longer without limit. Such an algorithm can’t exist because it’s infinite in length. This can be solved by discarding the idea of the algorithm defining one randomised universe. Instead we consider the algorithm that methodically defines all such randomised universes, turning it into a nonrandom algorithm. Ironically this brings the algorithm back to finite length and the mathematical structure it creates contains all possible time line branches in one fixed structure.
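The trick of replacing one infinitely long randomised algorithm with a single finite, nonrandom one can be sketched as a dovetailing enumeration. Here is a toy illustration of my own (assuming, purely for simplicity, that each branch point offers a binary choice): a short, fixed program that methodically generates every finite branch history.

```python
from itertools import count, product

def all_branch_histories():
    """Deterministically enumerate every finite sequence of binary branch
    choices: a finite, nonrandom algorithm whose output nevertheless
    covers all 'random' timelines."""
    for length in count(1):                     # histories of length 1, 2, 3, ...
        for history in product((0, 1), repeat=length):
            yield history

gen = all_branch_histories()
print([next(gen) for _ in range(6)])
# [(0,), (1,), (0, 0), (0, 1), (1, 0), (1, 1)]
```

The program itself is a few lines long, yet the structure it defines contains every branch; no single timeline is singled out, which is the point.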
This concept of branching universes already exists in quantum mechanics and is called the many-worlds interpretation. Creatures that evolve within this multiverse are only aware of the path they are on, and if they develop insight into quantum mechanics they will be confronted with apparent randomness caused by the branching.
Third prediction: We live in a quantum mechanical universe – check.
This prediction is pretty much the same as the prediction that the universe is big. In this case big means multiple time lines.
Finally, why should we expect the universe to begin with a Big Bang? We’ve already covered the Big part; now the Bang. This time let’s restrict ourselves to all universes that follow our universe’s algorithm and start with the same number of points. What patterns are likely among all the possible initial patterns of connections? Overwhelmingly, most of them are random in structure, and it turns out that randomly structured networks are almost always what’s called a small world network. A commonly known instance of this is the Kevin Bacon Game (or six degrees of separation): if you take the network’s points to be Hollywood actors and join two points when the actors appeared in the same movie, then pretty much every actor is linked to every other by no more than six joins. In the case of our universe, we’d suppose that our initial state was almost certainly random and would therefore be extremely small. For a random network, the size follows the rule
The typical distance between points is proportional to the logarithm of the number of points.
The constant of proportionality depends on details such as the average number of connections per point.
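A quick experiment makes the rule concrete. The sketch below (my own illustration; the sizes and the average degree of 6 are arbitrary) builds random networks and compares the measured typical distance with log(n): the network size grows sixteen-fold while the distances barely creep up.

```python
import math
import random
from collections import deque

def random_network(n, avg_degree, seed=0):
    """Random network on n points: each possible connection is present with
    probability p, chosen so points average avg_degree connections."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    nbrs = {i: set() for i in range(n)}
    for a in range(n):
        for b in range(a + 1, n):
            if rng.random() < p:
                nbrs[a].add(b)
                nbrs[b].add(a)
    return nbrs

def mean_distance(nbrs, samples=200, seed=1):
    """Average, over random pairs, of the minimum number of joins between them."""
    rng = random.Random(seed)
    n = len(nbrs)
    total = reached = 0
    for _ in range(samples):
        start = rng.randrange(n)
        dist = {start: 0}              # breadth-first search from start
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in nbrs[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        goal = rng.randrange(n)
        if goal != start and goal in dist:
            total += dist[goal]
            reached += 1
    return total / reached

# Measured typical distance versus the log(n) prediction:
for n in (200, 800, 3200):
    net = random_network(n, avg_degree=6)
    print(n, round(mean_distance(net), 2), round(math.log(n) / math.log(6), 2))
```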
Consider this back-of-the-envelope estimate. The observable universe is estimated at about 10^80 cubic metres. If a network join is of the order of the Planck length of 10^-35 metres then, assuming the network approximates a uniform lattice, that gives an order of 10^105 nodes per cubic metre (10^35 cubed), and hence 10^185 nodes in the visible universe (10^105 x 10^80). By the formula above, the initial state of the universe was therefore of the order of 10^2 Planck lengths wide (the log of 10^185 is 185, roughly 10^2), which is 10^-33 metres. To give perspective, a proton is 10^-15 metres wide: 1,000,000,000,000,000,000 times wider than the initial visible universe.
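The arithmetic above can be checked directly, keeping everything to orders of magnitude:

```python
import math

volume = 1e80          # observable universe, cubic metres (order of magnitude)
planck = 1e-35         # Planck length in metres (order of magnitude)

nodes_per_m3 = (1 / planck) ** 3   # ~10^105 nodes per cubic metre on a lattice
nodes = nodes_per_m3 * volume      # ~10^185 nodes in the visible universe
typical = math.log10(nodes)        # small-world rule: distance ~ log(nodes)

print(typical)                     # ~185 joins across, i.e. of order 10^2
print(typical * planck)            # ~1e-33 metres wide

proton = 1e-15                     # proton width in metres
print(proton / (typical * planck)) # ratio ~10^18
```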
So from this tiny initial state the algorithm cuts in and the universe evolves. An algorithm can de-randomise the network. Of all the possible algorithms, some would make the universe smaller, but some would cause points to distance themselves from each other. It’s hard to imagine a civilisation emerging in a small world network, and so, for us to exist, we needed to be in a universe with an expanding algorithm. While this philosophy doesn’t suggest how fast the universe should expand, we can note that, with a natural time step equal to the Planck time of 10^-44 seconds, one would presume that an expansionary algorithm could create a big universe very, very quickly.
It is my intention to search for such an algorithm. Not the one that defines our universe but one that merely demonstrates a proof of concept: that it is possible to start with a random network and apply an algorithm that makes it expand explosively fast, say in well under a mere 10^44 iterations. If I do come up with one then there’ll be a part 4 to this series. In the meantime if anyone out there knows about this or other network evolving algorithms, or better still comes up with a solution themselves, I’d love to hear from you. I’ve had a bit of a look but there seems to be precious little out there.
So there you have it: big universe, quantum mechanics and a Big Bang.