The Faster Than Light Hypothesis
By Theo Quinn
First Edition 2020.
Copyright © 2020 Dsvr. All rights reserved.
Cover art copyright © 2020 Dsvr. All rights reserved.
The author and publisher do not assume and hereby disclaim any liability to any party for any loss, any kind of disruption or any kind of damage (including but not limited to special, incidental, consequential or other damages) caused, or alleged to have been caused, by errors or omissions, whether such errors or omissions result from negligence, accident, or any other cause.
No part of this publication may be used or reproduced in any manner in any form or by any means, except in the “fair use” case of brief quotations embodied in critical reviews. Requests for permission should be addressed to email@example.com.
The author and publisher are offering this book for entertainment and/or informational purposes only, and not as any kind of advice, professional or otherwise. If you rely on any information from this book, it is solely at your own risk. The author and publisher make no representations, warranties or guarantees of any kind, express or implied.
A speculative technology might finally give us Faster Than Light (FTL) travel. The proof, if we only knew where to look, could have been ours by now.
If hypothetical aliens could tell you how they traveled to Earth, would you be surprised? Would the same physics we hold dear be close to their hearts too? No, not really. According to Relativity, we’re pretty much stuck here in the Solar system, and can’t leave in any meaningful way.
Hence this book isn’t about Relativity or Quantum Mechanics. If you’d like to learn about them, the bookshelves are crammed. What I want to show you is why we don’t need either one, and why that would be very useful to us. That’s the FTL Hypothesis and what this book is about.
From simulated reality to basketball games, and from Artificial Intelligence to pastry shops, analogies abound. Yes, analogies are incorrect by definition. Yet all popular science books use them, and there’s a reason for that. So enjoy them for what they are – a help to visualize the idea.
The physics paper with FTL equations is at the very end in the “FTL Hypothesis” chapter. No analogies are used there of course, but it’s still a quick read.
Let’s talk about hypothetical exploits of trying to reach other stars. Soon enough, you’ll see why this makes sense in the context of the hypothesis.
Suppose you’re a starship captain trying to increase the speed near Earth. You have some great propulsion tech, but still, you’re hitting a “brick wall”. The speed of light is the fastest you can aspire to.
Now, say you’ve got an idea. You’ll take your gleaming ship outside the Solar system, out to deep space. So this is your second attempt to see how fast you can go.
Well, you’re still hitting a “brick wall”. There’s this speed limit you seem to be stuck at. But lo and behold, this speed limit is higher. Maybe it’s 17 times the speed of light.
And as you go further out into deep space, you’re amazed the limit is rising. Your space speedometer is now showing 113 times the speed of light!
As you’re getting closer to your destination, a nearby star with a beautiful planet, your top speed is coming down, slowly but surely. Finally, as you enter the final leg of your journey, you’re back under the speed of light.
According to our science today, this story is purely fictional. It may not be.
In a nutshell, reality is a massively parallel computing system.
If true, we can travel the Galaxy the way you fly around the world.
The idea is that every particle, like an electron, is a kind of a computer of its own, and all of them operate in parallel.
Or simplified, the Universe is a computer. A very different one, yet it still works by processing information, just like any other.
How different is this from your garden-variety computer, the one you’re using to read this?
Well, a lot. I’ll give you a visual.
First, you probably know everything is made of particles like electrons and protons. That means you and all your stuff. Plus literally everything else.
Now imagine each particle as a super tiny computer. And all the space in the Universe as a kind of a network.
It’s like each particle has its information spread everywhere for other particles to pick up.
So they do, and they process this information, meaning they compute.
The one and only goal of this computation is to guide the particle. All of its motion comes from constantly computing.
And the motion of particles is reality. It would mean you’re computed too.
The information I am talking about isn’t ordinary information. It’s simple though. I’ll talk about that a bit later.
Let’s start with something closer to home. Consider this: someone gives you new information, and the rest of your day is now different.
For example, I tell you there’s a great sale at your favorite shop, and as a result you’ve altered your plans for today and instead are shopping right now.
The point is, some information changed where you’re going.
That’s the key concept – information changing the itinerary. Keep this simple idea in mind. It’s all there is to it, believe it or not.
To that end, imagine if every particle in the Universe changes its course based on the information it has.
Imagine seemingly empty space around us filled with information coming from all particles that make up Nature.
Imagine this information used by every single particle to change its motion.
What if this is the mechanism by which laws of physics come to be?
Other questions come to mind. For instance, how does a particle “get” information? How does it “process” information? How does it “project” information to space around it, so other particles “get” it? And what kind of information am I talking about? What would be the consequences for us?
These kinds of questions are the topic of this book. The answers are simple and don’t require more than a basic visual to understand.
More importantly, the answers can be translated into math. Amazingly, it’s identical to equations from physics textbooks, except for a few important predictions that contradict them. That’s the interesting part leading to FTL.
But I will establish these equations by following new ideas and not those of mainstream science. At the very least, it’s an intriguing exploit that I can do this.
Here’s where today’s science stands, in very broad terms: it explains reality as a collection of particles, and each of them carries information.
What I’m proposing simply goes a bit deeper: reality is still a collection of information-carrying particles, however each particle does information-processing itself. This makes the exact same math, except... in deep space the speed of light is not the limit.
For this particular case, the equations are different, and the top speed can be much higher.
No one (from Earth anyway) has tried this. So the jury is still out, regardless of anyone’s opinion. Keep in mind there are predictions made here that can be tested. So it’s possible to know now if the technology is viable, should we decide to try.
The FTL Hypothesis is simple. It uses only three-dimensional space, just the way we see it. There are no extra dimensions or bending of space. Not to say such concepts aren’t useful in some theories, but the less mind-boggling the better.
Ever heard of Occam’s Razor? Basically it says the simplest solution is often the correct one. When it comes to space travel, maybe that’s true after all.
To begin with, the usual concepts of modern physics are missing.
I need no mass, light, gravity, energy or force, and no principles of relativity or quantum mechanics. That by itself should raise your eyebrows, but it’s actually a good thing because it means less clutter.
If you want to throw around a neat phrase to sum this up, you’d say I start “before the first principles of physics”. That basically means ignore everything you think you know and start over.
Come to think about it, if there are aliens out there who can fly over here, and if they could show you their physics, maybe you’d have to do this anyway.
Remember a little tidbit I told you about the sale going on at your favorite shop?
The same way you changed your plans for today and went shopping after you got the information about the sale, everything else in Nature works the same way.
For example, an electron doesn’t change its trajectory unless it gets new information. It could be that another particle is nearby.
The idea is there’s a simple way for every particle to provide information about itself to every other particle.
If you enter a room full of people and shout your name, that’s you providing information about yourself. Thankfully tiny particles like electrons don’t shout, but the hypothesis is they too provide information about themselves to the world.
Does that sound okay to you? Maybe even a bit trivial? It’s not once you start getting into details.
What I propose is even elementary particles use information to act the way they do. If they don’t have any information to act on, they don’t change their motion.
It’s like if you have no reason to get out of bed, you won’t. You need some information to get your foot out from under the blanket.
To give you a better view of how I think Nature works, I will start with some analogies that are easy to understand. I will use simulated reality to do that.
Don’t confuse this with the Simulation Hypothesis. It claims our Universe is a computer program and a simulation.
That’s not what I am saying, but I will use it as a stepping stone to explain a different idea.
This idea is that the Universe itself is naturally computational. There may not be anyone out there running a program in which we’re just virtual puppets.
And it turns out, nothing is predetermined. As you will see, much of what happens is by chance and so there’s no way to exactly predict what happens next.
A word of caution. I said the Universe is likely to be naturally computational. While I think our Universe is not a simulated one, there will be plenty of reasons to suspect it might be, if you’re so inclined.
Since I can’t tell you what to believe, it’s fair game. After you’re done reading this book, pick your simulation stance, so to speak.
As I do this, I will deduce much of the physics as it’s known to us today. This is good as it means this hypothesis may be true.
To name a few, I’ll deduce the law of gravity, much of the equations known today as Relativity, and the basics of Quantum physics. I’ll just arrive there in a very alien manner, at least as far as a physicist of this era might tell you.
But first, I will contemplate a world that’s simulated by a computer. It’s a good starting point. It’s also easy to understand given the day and age we’re living in.
Imagine a world of simulated reality living inside your computer, being created with some really good software, the kind of software we will soon be able to build.
It’s like making a whole new Universe, such as in a 3D game. There are people in it, together with houses, cars, streets, trees and the sky. And yeah, they have smart phones too!
Imagine your computer is advanced enough, so this simulated reality is as good as the reality we live in. People in this reality are intelligent and self-aware, but they don't know they all live in your computer.
What is the difference between our world and the simulated world? For all intents and purposes of living in the world, there is none.
People in this simulated reality actually believe they are real people. And to them, they are as real as “real” can be.
However… For all intents and purposes of understanding the world, there is a difference, and it’s a very important one.
Imagine if the people in simulated reality at some point learned they are not real, but are actually a simulation in a computer.
Doesn’t matter how that happened. Maybe you cheated and sent them all a text message announcing they live in your laptop.
Or maybe you told them the battery is about to run out and they better prepare for an abrupt end. Okay, maybe not that one.
Once they know it’s all a simulation, what could simulated people figure out? Surprisingly, they could use some very basic reasoning to understand their world.
For example, if they knew everything that happens is a result of computation on your laptop, they could use it to their advantage.
How could they? After all, they only know their world is the result of computation, but they don't know how it's done.
However, they would know every change that’s happening in their world has to be computed first before it happens. That’s very different from today’s physics, which says there are physical laws that rule everything.
A physical law is just a mathematical equation. It offers nothing in the way of internal workings for why things follow it. In physics as we know it today, Nature simply follows laws.
In what I propose, and in how things work in the simulated reality on your laptop, things must be computed before they happen.
But if everything has to be computed, then everything would take a bit of time. That means no change is instantaneous.
It also means something that requires more information to process will take longer to complete. This is important. I’ll call on this and it will be instrumental in showing why in deep space FTL may be possible. A simple thought-experiment can highlight this.
Imagine a few times a month you’re mining cryptocurrency on your laptop. Since this is very intensive, the simulation will now have to share the computing power. That means it will be slower. Everything simulated people do will now be slower.
Suppose you time how long sporting events take in the simulated Universe. A simulated basketball game used to take 60 minutes. Now it may take 100 minutes.
That’s 100 minutes of your time. For a simulated person watching the game, it’s still 60 minutes because their simulated clocks also tick slower. They can’t tell the difference because for them everything moves equally slower.
But imagine if you decided to keep your favorite simulated game playing as fast as it did before. After all, you don’t have time to waste! So what will you do?
You might give more processing power to your proteges and take it from other parts of the simulation. So now you can still see the game in 60 minutes. Nice!
But, the downside is the processing power you added to speed up the game will have to be taken from somewhere else in your laptop. There’s only so much of it to go around.
For example, you would reduce computing power for a neighboring simulated town. So now in this “neglected” town, a similar basketball game might take 140 minutes.
Of course it’s not just the game. Everything, including people, dogs and cats will take longer to do whatever. From where you sit, time itself has slowed down for them.
You did all this so time in your favorite town would move fast enough for you to see the game. It means for some simulated people time would go faster, and for others it would go a lot slower.
Since they are neighbors, they could actually see this effect. The people from a “neglected” town would watch in amazement as everything in the other one moves like it’s on fast-forward.
And the “fast” folks would marvel at people, cars, tower clocks, even tree branches in the wind moving in slow-motion just one town over.
All this is quite possible in the simulated Universe. Time goes slower in one place, faster in another.
Clearly, this happens because you added more information. When you started mining cryptocurrency, there was much more data to crunch. So more information to process means simulated time goes slower.
And when you gave more processing power to one town, the other one had less. It’s all about how much information there is to process, and how much processing power there is to deal with it.
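To make the towns-and-games arithmetic concrete, here is a toy model in Python. The inverse-scaling rule and the specific power shares are my own illustrative assumptions; nothing in the hypothesis prescribes them:

```python
# Toy model: observed duration scales inversely with the share of
# processing power a simulated region receives. The scaling rule and
# the numbers are illustrative assumptions, not part of the hypothesis.

def observed_minutes(simulated_minutes, power_share):
    """Wall-clock minutes you'd wait, if the region gets `power_share`
    of the laptop's processing power (1.0 = the whole machine)."""
    return simulated_minutes / power_share

# A 60-minute simulated game with the whole laptop available:
print(round(observed_minutes(60, 1.0), 1))       # 60.0

# Crypto mining leaves the simulation only 60% of the machine:
print(round(observed_minutes(60, 0.6), 1))       # 100.0

# The "neglected" town squeezed to about 43% of the machine:
print(round(observed_minutes(60, 60 / 140), 1))  # 140.0
```

In every case the simulated clocks still show 60 minutes; only the outside observer sees the difference.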
Believe it or not, something analogous to this happens in the real world, and it’s called “time dilation”. Depending on circumstances, our own time goes slower or faster.
For instance, a clock on board a flying jetliner will run slower than one on the runway. A clock far from Earth ticks faster than one here. These are facts we already know to be true.
This isn’t saying we live in a simulated Universe. It isn’t saying we don’t live in one, either.
But in any case, as I will explain, there may be more to this analogy. Our Universe may work in ways that aren’t entirely dissimilar to the simulated one. Keep that in mind.
Let's step back.
The inhabitants of simulated reality can understand their world better, because their world is based on information - after all, it runs as a program on your computer.
Our reality may not be a program on someone's computer. So how does all this help us?
It can. Maybe our reality isn’t a program on someone’s computer, or maybe it is. There’s not that much difference in the end. The rules would be similar enough.
If our Universe is naturally based on computation, fine. If it’s running on someone’s laptop, guess what? The rules are likely to be similar, assuming this laptop is built simply and efficiently, as it likely would be.
Of course, if intelligent beings are simulating our Universe, then such a reality may not be stable. It may be at someone’s whim. The rules may change abruptly, and we may not even be aware of it.
And “we” might be just characters in a game, with our souls transposed from beyond the simulation.
Still, the rules we see right here and now seem to indicate our Universe is the “natural” one, even if we’re close to creating simulations ourselves. Why do I say this?
If our world is a simulation, it’s a pretty indifferent one. It doesn’t seem to budge from a solid disinterest in our affairs or even survival. It doesn’t take sides in our own divisions.
That doesn’t mean we’re not in a simulation. But it does give me an inkling we’re in a “natural” reality. Or, at the very least in a simulation that aspires to be just like it. I’d say it’s a distinction without a difference.
So for the sake of argument, let’s assume our Universe is indeed naturally computational.
If you’d like to think our Universe is simulated, i.e. that it runs on someone’s computer, go ahead. The story, at its core, won’t change much. It’s kind of nice that it ultimately doesn’t matter.
Now I’ll talk about why it’s helpful to know our Universe is based on processing information.
As I go on, I’ll deduce lots of things from that simple assumption.
In the end, one of those things will be that in deep space the speed limit may be much higher than the speed of light.
My hypothesis is that Nature works by processing information. I said this information isn’t the ordinary kind. Reality has its own information, which it uses to function.
It would make sense we’re not privy to this information, kind of like simulated people in your laptop are not privy to the Operating System and the software that runs them.
I call it Nature’s internal information. It is possessed and shared by particles. By using it, they create the reality we live in.
So can we tune in and read this information? That would be cool, because maybe we could predict everything. You could get rich on the stock market!
But alas, no such luck. If we could read it, it wouldn’t be internal.
In the simulated reality analogy, simulated people can’t see inside your laptop. Nature’s internal information is the same way.
How does this compare to what we think today?
First and foremost, Nature’s internal information is different from information as we know it, which is observable information.
Think about the simulated world for an analogy.
The internal information is what's in the computers running it behind the scenes. This would be the Operating System and all the software and data. People in the simulated reality obviously cannot see any of it.
In contrast, the observable information would be what’s available to the folks in simulated reality – this is anything people in this reality can see, measure or experience.
This is a very good visual to understand the difference.
So, observable information is anything we can see and measure. That’s how Nature looks to us.
Internal is what Nature uses internally so it works as it does.
Most of the references to information in this book are to the internal kind – I am sure you got that already.
Here is where I’ll talk about how Nature works.
This is the gist of it.
Every particle in Nature works by processing information. This information comes from all particles. How does a particle “see” the information of other particles?
In physics as we know it today, this could happen by exchanging photons, for instance. I have something else in mind.
Imagine each particle has a field of information about it. This field exists all around the particle: denser closer to it, and thinner farther away.
The field is inseparable from a particle. When a particle moves, the field moves with it without delay. A particle and a field of information are one.
If you think about a particle and its information field as inseparable, then it makes sense they move in unison. The field isn’t caused by a particle, it is a part of it.
You can imagine this field as a bunch of “shells” around a particle, and these “shells” are transparent and go on into infinity. Each shell has information on it that describes the particle.
The very same information is present on each shell. Those closer to a particle will have a smaller surface, while those far from it will have a larger surface.
Since the information on every shell is the same, it means the density of it changes with distance. It is dense near a particle and sparse far from it.
So when particles fly through space, they cross each other’s shells and get the information about one another directly. There is no need to exchange anything, although exchanging photons is still just fine.
Of course, these “shells” are imaginary, so don’t get your nut cracker out yet. But it’s useful to keep them in mind.
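The “shells” picture can be sketched numerically. This is a minimal illustration of inverse-square dilution; the amount of information per shell is a placeholder value of my own choosing:

```python
import math

# Minimal sketch of the "shells" picture: the same total information
# sits on every spherical shell, so its surface density falls off as
# the inverse square of the distance. The numbers are placeholders.

def shell_density(total_info, r):
    """Information per unit area on a shell of radius r."""
    return total_info / (4 * math.pi * r ** 2)

d_near = shell_density(1.0, 1.0)
d_far = shell_density(1.0, 2.0)
print(d_far / d_near)  # 0.25 -> doubling the distance quarters the density
```

This is the same geometry that makes light, sound, and gravity fade with the square of distance, which is why the analogy feels familiar.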
Next to think about is the speed of processing information. This means how fast a particle reacts when information changes.
Clearly, our reality works very smoothly. You don’t see a “Please wait, loading...” sign flashing anywhere in Nature. Okay, you wouldn’t see that exact sign, but let’s say you don’t see things freeze and skip. Things just work. It means Nature must process information very fast indeed.
But the speed of this processing can vary. When there’s lots of commotion, a particle will take longer to process what’s going on. Why is that?
First I’ll explain it and then give you analogies from everyday life.
Here goes. If a particle moves, it crosses a lot of “shells” of other particles. In doing so, it collects information from them. The faster a particle moves, the more shells it crosses and the more information it collects.
And the more information to process, the more sluggish it is.
So a particle in motion will react slower. Or if you prefer, time will slow down for it.
Now, a particle in motion isn’t the only situation where time slows down.
It also happens when particles get close to each other. The closer they are, the slower they react. Why is that?
It’s simple. I said there’s more information near a particle than away from it.
So when particles are closer together, there’s more information to process from each other.
And the more to process, the slower they react.
The bottom line is this: time slows down when particles move, or get closer to other particles.
This is predicted by both my hypothesis and mainstream science. Each of these effects has been confirmed by experiments.
All of what I just said boils down to a single recurring theme: more information means slower processing and slower functioning.
Remember the simulated world? The same thing happens there. Like I said before, this doesn’t mean our Universe is simulated, and it doesn’t mean it isn’t. Either way, when there’s more information, time itself slows down for a particle.
While the concepts here are simple, sometimes it helps to think of something a bit more relatable.
To that end, I’ll give you a few everyday analogies to fit the bill. They’re here as a visual aid, nothing more.
The first one is about a slow-down when in motion.
Imagine you're driving through a busy city with a lot of pedestrians milling back and forth. Your mind will be slower to react when driving, compared to when you stop the car. Why?
This is because there are many more details to process about the surroundings when the car moves. The faster the car, the faster things change, the more details to process, and the slower your reaction is.
Well, a particle that moves fast is in a similar bind. Never mind the gigantic gap between you and a tiny particle like an electron. The fundamental reason for slowing down is the same.
It has nothing to do with how exactly your mind works or how exactly electrons navigate. It has to do with both using information to change motion. That’s the key.
Put differently, the above example is a simple statement that your mind is an information system. It has limited capacity. When there's more information to process, it simply has to react slower.
When you're in motion, your reflexes become sluggish because there's more to take in. It's as if time itself slows down.
In reality though, it is only the speed of processing information that has declined. Is this effect limited to human drivers? Of course not.
The exact same thing would happen if a robot drives the car, as in a driver-less car. Too much information will overwhelm the AI (“Artificial Intelligence”) too.
The same thing happens to a moving electron. The faster it moves, the more information becomes available to it, and its own “reflexes” will slow down.
It doesn’t matter how an electron processes information. It surely doesn’t have a brain like you. It surely isn’t like any computer we know.
But an electron is an information system, just like your brain. It’s just a tiny information system, but that doesn’t matter either. Regardless of its size and regardless of how it works, every information system will react slower when overloaded.
As I said, a particle, a person or an AI aren’t the same thing at all. But the concept of limited processing capacity is the same. It’s fundamental.
So an electron in motion would do everything slower. The same would happen to any particle, not just an electron.
Anything that’s made of these particles would also operate at a lower pace. It would evolve slower. Effectively, time would slow down for it.
The effect of time slowing down is real. It’s been known for about a century now.
The explanation for it, however, is very different in physics as we know it today. Current science accounts for this effect in a way that isn’t easy.
In mainstream physics it’s called “time dilation”. That term describes it well, since everything a particle does gets prolonged (or “dilated”), and hence time does slow down for it.
In physics textbooks you’ll find this is explained from a principle of relativity. It has to do with the speed of light assumed to be constant. That assumption isn’t needed here.
In the FTL Hypothesis’ view, the explanation is simple: there’s too much information to process, and the processing takes longer.
You could say an electron does everything slower, or you could say time slows down for it. It’s more of a nomenclature issue. Call it whatever you want.
Now, consider if everything in Nature works by processing information, including our electron and all other elementary particles. Since you’re made of particles, it means if you move, time slows down for you too.
It’s an important conclusion. This is also the point that leads to the ability to travel faster than light. Next, I will illustrate why that’s so.
First, think again about driving in the city. You know there’s a speed limit. Maybe it’s 35 miles/hour, maybe it’s 45. There’s a reason for that. If you drove past the limit, you wouldn’t be able to control the vehicle safely.
I already explained it’s because there would be too much information for you to absorb too fast. You have a limited capacity to process it, so you couldn’t do it as quickly.
If a pedestrian jumped in front of the car, you couldn’t brake fast enough, even if your car could stop on a dime.
Now think about a particle that moves faster and faster. As there’s more and more information to process, its “response time”, too, will become longer and longer.
At some point, this “response time” will be so long, a particle won’t really do anything any more. And it won’t accelerate either. Why?
Recall a particle changes speed in one way only: by processing information. So, if that’s not happening, the speed can’t increase any more either.
In other words, a particle will hit a speed limit.
The speed limit happens when an object moves so fast that it collects enough information to grind its processing to a halt.
Now, consider what I just explained. I deduced the existence of a speed limit. And did so in a very simple way, at least compared to how today’s physics explains it.
Here’s a question: is there a particle that always moves at the speed limit?
Yes. It’s called a photon. Or as we usually call it, light. Its “programming” says to always move as fast as possible.
Having a fixed speed limit is how the story ends in current science. However, as you already know, in my hypothesis that’s not the end of it. More specifically, the speed limit isn’t the same for all situations.
So how do I get to a possibility of different speed limits? And especially, higher ones in some places? It’s all about how much information there is to begin with. Let’s go back to the driving analogy to visualize it.
Instead of driving in a busy city, imagine you’re driving on a straight road with a few cars in the middle of the desert, so there’s little to distract you. If you’re thinking you can move a lot faster, you’re right.
That’s why the speed limit through the Arizona desert is 75 miles/hour, higher than elsewhere. The reason it’s okay to have a higher speed limit is that there’s less information to contend with.
In light of this analogy, think about how Nature might work, and especially think of Earth and the Sun.
They are huge sources of information. However, as I said, it declines with distance.
So if an object moves far away from them, it too will have less information to process. That means it can accelerate to higher velocities before its processing stalls.
Just like you driving out in the desert.
In other words, for an object far from large mass, the speed limit is higher! There is less to contend with far away.
There’s more to FTL than just being far from large bodies like Earth or the Sun. It has to do with your mass.
A massive object can have a higher speed limit. I will give you another analogy to understand this.
Imagine if AI (Artificial Intelligence) is driving the car.
The question is, how does the amount of computer memory affect AI’s reaction time, or its “reflexes” as it were?
We all know intuitively the more memory there is, the faster it can go. Why is that?
At higher speeds, there is more information to contend with. That means more memory is required.
What if you add more memory? That means AI will be able to process the additional information and thus move faster.
Basically the speed limit is higher if there’s more memory. That’s the key point.
Now let’s step back from the AI driving analogy and back to how the Universe might work.
In case of a physical object in our reality, it’s similar.
If a particle can hold more information, it can process more. As it collects more when it accelerates, it won’t be as “overwhelmed”. Hence, its speed limit is higher.
And what kind of object am I talking about here? What kind would have more information storage than a single particle?
Obviously, an object comprised of a large number of particles, meaning more massive.
A greater mass will have a higher speed limit.
However, this is noticeable only with rather big objects far from planets and stars. So a large starship in deep space would do, but a small probe may not. Why?
Think of the size of Earth or the Sun. Consider their effect on a small object, like a probe, and a large one like a starship.
Now, remember I said there’s less information far from its source.
I also said it’s spread out on “shells”, with shells obviously being nothing but spheres. And if information is spread out like that, it means it declines with the square of distance.
Thus it diminishes rapidly in deep space, far from Earth or the Sun. At that distance, a large starship will be huge compared to this declining information.
But a small probe may still be tiny in comparison. So you can understand why a starship will have a much higher speed limit. As it moves faster, it can collect more information without being “overwhelmed” by it.
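If you like, here’s the arithmetic behind the decline in a few lines of Python. The function name and numbers are my own illustrative assumptions; the only real content is the inverse-square falloff, which comes from the same information being spread over ever larger spherical shells:

```python
import math

def info_density(source_strength, r):
    """Information per unit area on a spherical shell of radius r.
    The same total amount is spread over the shell's surface (4*pi*r^2),
    so the density falls off as 1/r**2."""
    return source_strength / (4 * math.pi * r ** 2)

# Purely illustrative numbers: doubling the distance quarters the density.
near = info_density(1.0, 1.0)
far = info_density(1.0, 2.0)
ratio = near / far  # doubling distance -> density drops by a factor of 4
```

The rapid falloff is the whole point: far enough out, the ambient information becomes sparse compared to what a large starship carries, but not compared to a small probe.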
You get the idea why the speed limit would vary depending on where you are and what your mass is. Current science says it doesn’t vary at all.
And since the varying of speed limit may happen so far away, there was never an experiment done to look for it.
That’s because until now, no one even suspected things might be different in deep space.
So you can see why current theories may not hold the whole truth, even if they work perfectly for us where we are now.
Let me quickly sum up the difference between the end result of my hypothesis and current science.
In today’s physics, if you try to accelerate to the speed of light, you’ll essentially stall. Everything you do will take longer. The clocks on-board will tick ever slower.
In the end, you won’t be able to reach this speed because … you’ll experience what’s known as “time dilation”. What I mean is time itself would practically stop for you. There are other effects, but this one is important and I talk about it the most.
In my hypothesis, virtually the same thing happens. But not exactly the same.
The difference is not apparent where we live, meaning on Earth. Or nearby within the Solar system. But out in deep space, there’s a difference.
As you accelerate, you will still hit the speed limit and experience time dilation. But… the speed at which this happens may be higher, and it may be much, much higher than the speed of light.
So doing another experiment close to here is a waste of money. You have to go a bit further out to see the difference. Also, the dependency on mass is on a large scale too. It isn’t about trying to accelerate 1000 atoms instead of a single one. You need to go to thousands and millions of TONS to see the difference.
Given no one has ever tried this, don’t let anyone tell you it’s impossible.
Where we live, our home planet is the dominant information source.
I explained that the speed limit here is reached when, moving fast enough relative to Earth, too much information is collected and the processing stalls.
Because this information originates from Earth, the speed limit is always the same relative to Earth.
And what moves at the speed limit? Light.
So the speed of light on Earth is always the same relative to it.
Let me give you a simple example.
Imagine jogging with a flashlight. Obviously you’re moving relative to Earth. The flashlight shines forward. What’s the speed of this light?
The speed of light will be the same relative to Earth, regardless of whether you jog or stand still.
That’s because a flashlight is a tiny information source compared to the planet itself.
Speed limit is always relative to the dominant source of information, which is Earth in this case.
However, if you put on a jet pack and fly far into deep space, this will no longer be true.
Now, Earth is far away and no longer a dominant information source. What is the dominant source now?
So in this case, the speed of light will be the same first relative to you, and then relative to other massive bodies nearby, depending on where it goes next.
Let me sum it up in a few words.
Basically the speed of light is the same near Earth, while far away that’s no longer true.
If we lived on Mars, it would be the same relative to Mars. Or Venus. Or Saturn. You get the point.
The local large mass is your anchor for speed limit.
Now this is important. What I just explained is a simple and direct consequence of the FTL Hypothesis. But it was a matter of great controversy long ago.
It was about the very same thing: the speed of light being the same whether the flashlight moves or not. As measured on Earth, of course.
This fact was confirmed in the late 19th century and people were dumbfounded by it. At the time, they thought the speed of light would vary with the movement of the flashlight. In my example, the physicists thought it would be different when jogging versus standing still.
Now, the experiment they did was different simply because they needed higher speeds. But suffice it to say, it boiled down to the same thing.
Once they tried this, however, they discovered the speed of light didn’t budge! It was the same whether the flashlight was moving or not!
They were shocked. They didn’t expect this at all. And then, the scientists made a huge assumption. They said it wouldn’t matter if the flashlight were somewhere else. For instance, strapped to a jet pack you used to fly out into deep space.
The scientists proclaimed the speed of light would still be the same relative to Earth, even far out like that. It is from this presumption we believe today nothing can move faster than light, no matter what.
That may be wrong. And all the supporting evidence may have been misinterpreted. That’s next.
Based on what I told you, it seems quite possible the assumptions scientists made about the speed of light, while certainly true near Earth, do not hold everywhere.
How, then, does the speed of light change? It’s simple.
Passing through deep space, the speed of light relative to us may actually vary, depending on what it passes by. It may be higher than 300,000 km/s or lower than that. Just like in the example with a flashlight.
That’s quite the opposite of always being 300,000 km/s, as it is in current physics.
So is there a way to figure it out, one way or the other?
Yes, the experiment I propose would make it clear what’s true and what’s not. I’ll talk about it soon.
In the meantime, observations of celestial bodies are often quoted as proof of mainstream theories. They are used to justify the assumption made in the early 1900s, which is still the official science.
However, the same observations validate my hypothesis as well. They prove both points of view! How’s that possible?
It is, and it’s not uncommon either.
Here’s one important example. It’s about the speed of light coming from distant stars. Typically, stars that orbit each other are observed. These stars move towards us and then away from us, then towards us, then away, and so on forever.
The observations show the light from these stars reaches us at the same speed, whether they move away or towards us. So, then the scientists were happy to say the speed of light is constant, even if it originates far, far away.
Not so fast. There is a much simpler explanation. One that doesn’t require any assumptions about light.
In my hypothesis, the speed of light varies the moment it leaves the stars, but very soon afterwards it becomes the same. Why?
Fairly quickly after the light leaves these stars, it’s the Galaxy that’s the dominant mass, because now the original stars are too far away.
So this light now moves at 300,000 km/s relative to the Galaxy.
When it reaches us, it will have traveled in excess of 99.999999999% of the distance at that same speed, regardless of whether it started from a star moving away from us or towards us.
Hence, it would appear to have moved at the same speed.
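You can check this arithmetic yourself. The sketch below uses made-up but plausible figures of my own (a star moving at 100 km/s, light behaving “locally” for a hundredth of a light-year out of a thousand); the point is only that the two average speeds come out practically indistinguishable:

```python
# Illustrative check: if light travels at c + v or c - v only for a tiny
# initial stretch, and at exactly c for the rest, the measured average
# speed is indistinguishable from c. All figures here are assumptions.
c = 300_000.0        # km/s, speed of light
v = 100.0            # km/s, star's speed towards or away from us
ly = 9.46e12         # km in one light-year
D = 1000 * ly        # total distance from the binary stars to us
d_local = 0.01 * ly  # stretch where the star is still the dominant mass

def travel_time(initial_speed):
    """Time for the trip: a short local leg, then the rest at exactly c."""
    return d_local / initial_speed + (D - d_local) / c

t_towards = travel_time(c + v)  # light from the approaching star
t_away = travel_time(c - v)     # light from the receding star

avg_towards = D / t_towards
avg_away = D / t_away
# The two average speeds differ by only a few parts per billion of c.
relative_difference = abs(avg_away - avg_towards) / c
```

So an observer on Earth sees the “same” speed of light either way, even though the light started out at two different speeds.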
And that’s how simple the explanation is. So then, this clearly validates the FTL Hypothesis.
But wait, isn’t this very same experiment also considered a proof for the mainstream theory? Yes.
It substantiates both points of view! I’m saying things aren’t black and white.
I am saying the unnecessary assumption made by scientists from the early 20th century may have been unfortunate. It is something we need to give a really hard look today.
As a result, mainstream science believes there’s nothing special to see if a large ship accelerates in deep space. In the FTL Hypothesis that situation is exactly where our next evolutionary leg up is. It’s about our future as a species.
The distinction is so important, it’s beyond words. And it is one singular case where we have zero observational data. In other words, we never tried it.
It’s like the keys to your car are always precisely where you haven’t looked. Except these may be the keys to our very survival.
I already talked about why a large object in deep space might be able to exceed the speed of light.
In short, such an object doesn’t get “overwhelmed” while processing as much information, and its speed limit is greater.
Here’s how that would play out in practice.
To reach higher speeds, a starship should be far from Earth and as far away from the Sun as possible.
The speed limit near bodies like these is the speed of light and nothing more. When I say far, I mean reasonably away from massive objects of our Solar system. The equations you’ll see later can be used to determine this distance, but suffice it to say it is achievable today.
Next, a starship must be massive. The higher its mass, the faster it can go.
A small probe will not be able to accelerate to a very high speed. As it turns out, massive objects might have a higher speed limit.
Think for example of galaxies. A galaxy could move much faster than light because it’s so massive. And there’s some truth to this, as I’ll show you later.
The problem of particles, rocks and other space junk hitting the craft is significant. Because of this, space-faring civilizations would likely have “beaten” paths through space that are generally kept clear of large debris.
For this reason, if there are aliens out there flying between the stars, they are probably sticking to these safe routes. If they don’t have a very good reason to go somewhere, they won’t, because it would be very expensive to create and maintain such safe routes.
In addition, even if these “safe routes” were very close to us, the hypothetical aliens wouldn’t come visit. Why is that?
Well, think of flying over the Amazon jungle. You’re very close to it, but can you just hop down, visit, and go back up?
In the same way, no one passing close by Earth will do that either.
Does this explain why there are no aliens here? But wait, aren’t we worth the trip? Maybe, in our own opinion. In the eyes of hypothetical aliens, the mileage on this opinion may vary.
You might think the inability to push particles past the speed of light here on Earth means big ships in deep space can do even less.
But it may be exactly the opposite. In the end, this can be settled pretty easily by trying it, so it’s not about believing or not believing.
Of course, accelerating a large mass would take much more energy. That’s a given.
What I’m talking about here is the speed limit, not how hard it is to reach it.
Finally, what about communication over interstellar distances?
A message would be recorded and sent in a spacecraft faster than light. It would be encapsulated during the entire flight and inaccessible to anyone other than the intended recipient.
Such a method is currently undetectable by us. It’s as if someone sent a messenger pigeon to you. Chances are you’d never even know to look for it, because who’s looking for messenger pigeons these days?
What we’re doing today is sending broadcast transmissions into space. Those may fall on deaf ears because no one is expecting any.
And when it comes to receiving alien messages, we are building ever bigger telescopes, which may not help if they are not sending any broadcasts.
Our current methods of communication may have been abandoned so long ago by the hypothetical aliens, they wouldn’t even know to look for our attempts.
Would that explain why we never hear from them, if they actually exist? Or why they don’t reply to our messages?
If so, the Universe would look very lonely to us, even if it were teeming with intelligent life. Now that would be the case of galactic miscommunication, and the joke would be on us.
This joke may be that we’re too impressed with the speed of light.
It may be that the speed of light in deep space is the “snail mail” of interstellar communication, and humankind is one of the few out there who still hasn’t figured it out.
A gravity generator would be a device to counter existing natural gravity. But it could also generate gravity where none existed. It would be like falling towards a planet, except there’s nothing there.
We may be in luck when it comes to generating gravity, much like we can generate electricity. An obvious use for it would be to provide one of the most important things every Earthling needs to survive in space.
But there are other uses, such as helping with clearing space debris out of the way. And of course, the propulsion.
The propulsion for interstellar ships would likely be “pull” based, rather than “push”. Today, it’s based on the push of burning fuel behind you. That’s rockets by another name.
So how to make gravity? Why would this be possible in the first place?
But first, what causes it? To answer that, let me start with a question. If everything happens because particles process information, what kind of resources are being spent to keep it going?
A fundamental premise is that nothing is unlimited. I assume processing information must use some resources, which aren’t infinite. So when particles compute, they deplete those resources. It’s like a simulation being given a certain number of processing cycles on a CPU, and it just stops when that number is reached.
In light of that, Nature wouldn’t waste the computational resources. All other things being equal, they would be preserved as much as possible. Here’s what that means.
I already said information declines away from large objects. It gradually gets sparser with distance. Keyword here being “gradually”.
Since there’s more information near a large object, a particle in the vicinity would compute slower.
That means using less resources, which are thus preserved and can last longer.
To keep preserving them, a particle would “want” to move closer to a large body.
That’s gravity. It’s about minimizing reality’s computation. The equations I’ll show later match exactly the law of gravity.
However, think about this for a moment. If this is an effect of moving towards a place of more information, then an object in motion does the same, at least conceptually.
Imagine a particle moving past an object.
The particle would collect more information, which is what we want. Of course, this effect is limited just to a pass-by.
Obviously, the same is true in reverse, when an object is moving past a particle.
But if an object rotates, or vibrates for instance, then this effect stays in place.
If an object does this fast enough and is shaped properly, a particle can be “fooled” into moving towards it as if it’s a large mass.
As such, gravity can be made. By that I mean real gravity. I don’t mean spinning donuts in space.
So in principle, gravity can be generated by causing a relatively small amount of mass to move really fast in a confined area. One trivial way is to rotate a heavy object or to vibrate it.
This way the motion doesn’t just pass by, it stays in place. It persists.
But there’s more to it. Think about what I said about how Nature makes gravity, around Earth for instance.
It’s like a slope in the amount of information. There’s more near Earth, then less a bit further, then a little less again, and so on. It gradually declines away.
It’s this rate of getting sparser that has to be done right.
This might be achieved by making the machinery in a conical shape.
Such shape makes for the gradual decrease of information, which is essential.
The mass of the gravity generator doesn’t have to be great, but it gets easier the higher it is.
There’s lots of room to be creative here.
The important thing though, is the arrangement in space, i.e. the geometry of the machine. As I said, the goal is to achieve a gradual change of information in a desired direction.
Some arrangements would work, others wouldn’t at all, no matter how fast they spin or vibrate.
The end result is that a ship would be “pulled” by the gravity it creates in front of it. It would be literally falling towards the non-existing large mass, and thus constantly accelerating.
This has the advantage of extremely high acceleration without the occupants feeling anything. Instead of their bones being crushed by, say, 100 G, they could hold a glass of wine while it takes place.
That’s because gravity affects everything equally. Even though you might be falling at a tremendous acceleration, the G-force on you is typically zero.
As I said, a similar approach might be used in creating Earth-like gravity in space without spinning ships, which are notoriously difficult to engineer. It could also be used to clear the debris out of the way while moving so fast.
Surprisingly, there is Faster Than Light motion we know about. Although according to current science there isn’t. Confused?
Okay here is the fact: some galaxies are moving faster than the speed of light. They are. It’s not a secret, it’s a fact accepted by mainstream physics.
So why isn’t this the proof Relativity is wrong?
Well, the explanation offered is that space itself is expanding. Consider it for a moment. This is NOT the same as the galaxies moving away fast. Yes, they are moving away fast, but the idea is that space between us and those galaxies is expanding.
In other words, physicists believe those galaxies are actually moving slower than the speed of light, but the space itself may be expanding faster than light.
If you are having trouble with such an explanation, I understand.
Can you truthfully comprehend this? If you do, good for you because I don’t! Can you possibly imagine that? Things can expand in space, but space itself expanding?
What’s the space expanding in? In itself? In some other kind of space we can’t see? Or is it that new space is being created between us? Maybe a bit of each? It’s hard to explain. It really is.
The math works out, mind you. But the math can work out for many an incredible idea, and that doesn’t mean any of it’s real.
Explanations of expanding space tend to work like this: you will feel awe not because you really understood it, but because you really didn’t. So I will leave it at that.
I do get uneasy, though, when scientists have to resort to these kinds of notions. Meanwhile, galaxies really are moving faster than light, regardless of the explanation.
Can this idea of “expanding space” be verified? The explanation offered is conveniently something we can’t directly verify.
I say galaxies are simply just moving faster than light. Yes, that would mean Relativity isn’t right.
So what if it isn’t? It wouldn’t be the first and won’t be the last.
One of the first things I deduced in the FTL Hypothesis was that massive objects can exceed the speed of light, and the more massive, the better.
Is there anything more massive than galaxies? And it turns out, some of them do move faster than light! Isn’t that an indication my hypothesis is right?
There are ways to verify my hypothesis without having to build huge ships. In fact, the testing could be quite cheap in comparison. There is no need to try and break the light barrier in deep space.
This is where the time dilation I mentioned earlier comes into play. There is a very significant difference between the mainstream theory (Relativity) and my hypothesis that’s easy to check.
It’s about how clocks behave in deep space when moving away from Earth and the Sun. And checking clocks is simple and wouldn’t cost that much to do.
One way is to send a heavy probe at high speed into deep space with one important caveat: this probe should avoid any other large mass, such as planets.
This is the opposite of what our probes do today – they tend to be light and head straight for large masses to take photographs and use gravity to slingshot themselves.
While that’s cheaper and a great photo-op, it’s also not going to give anybody any ideas.
The experiment I propose is to send a heavy probe very fast into deep space and avoid all other planets as much as possible. A probe needs to attain speeds high enough to clearly show the slowing of clocks.
When I talk about “slowing of clocks” I mean due to speed. The other kind, due to mass, would be accounted for. If you’re not sure what that means, don’t worry, it doesn’t change anything.
If mainstream physics is right, the clock on this probe will slow down and remain that way as it goes further away from us.
If I am right, at some distance the clock will start to tick faster and eventually revert close to normal. By “normal” I mean as on Earth.
This is easy and relatively cheap to do right now. I suspect the latter will happen and we’ll be on our way to interstellar exploration.
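To make the contrast concrete, here’s a toy Python comparison of the two predictions. The functional forms are illustrative assumptions of mine, not the hypothesis’s actual equations; they only capture the shape of each claim, a clock rate that stays slowed versus one that recovers with distance:

```python
# Toy comparison of two predictions for an on-board clock's tick rate,
# as a fraction of an Earth clock's rate. The formulas below are
# illustrative assumptions, not the book's actual equations.
def mainstream_rate(distance, v_frac=0.5):
    """Relativity: the rate is set by speed alone; distance is irrelevant."""
    return (1 - v_frac ** 2) ** 0.5

def ftl_hypothesis_rate(distance, v_frac=0.5, scale=10.0):
    """Sketch: dilation fades as the probe leaves the dominant information
    source, so the rate climbs back towards 1 (Earth-normal) with distance."""
    slowed = (1 - v_frac ** 2) ** 0.5
    weight = 1.0 / (1.0 + distance / scale)   # made-up falloff shape
    return weight * slowed + (1 - weight) * 1.0

near = (mainstream_rate(0.0), ftl_hypothesis_rate(0.0))       # identical
far = (mainstream_rate(1000.0), ftl_hypothesis_rate(1000.0))  # they diverge
```

Near Earth the two predictions agree exactly, which is why no experiment to date could have told them apart; only far out does the difference show.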
Think back to “shells” around particles – yes they’re imaginary but useful. These shells go around a particle to infinity.
Each shell has the same information on it, describing a particle.
Imagine there’s a shell around an electron that has written “I am an electron” somewhere on it.
So as you pass through this shell, you pick up the writing, and voila! You know it’s from an electron.
But, let’s say you come from a different angle and you miss the writing. What would you do to increase the chances of getting to it?
There is only one thing you can do. The location of “I am an electron”-writing must change randomly all around the shell at a pretty good rate.
That way everyone from every direction can reasonably get to it.
For example, you can scroll the writing all around the shell really fast, kind of like stock tickers move.
Just like if you’re passing by a stock ticker, you can get the particular stock price as you walk by. Tickers changing constantly is what enables you to find out what you want.
In the same way, information around a particle like an electron must shift pretty fast. Otherwise other particles might not notice it.
For instance, two electrons might pass each other and never change course, whereas normally they would be repelled.
Actually they sometimes do that, and in my hypothesis it happens because the “writing” on their “shells” shifts randomly. Because of that, it could happen this “writing” just wasn’t “seen” by the other electron as it passed by.
Unlikely, yes, but not impossible.
That means getting the right information is sometimes unpredictable and may just fail.
This is important. We know Nature favors probabilities and not certainties. With this hypothesis I can explain why.
Needless to say, but of course, there is no actual writing on the “shells”.
Instead, perhaps you could think of information as “scattered” around the particle. It’s like a cloud of dust. There are specks of “dust” all around and they just shift randomly, the way dust is in ordinary life.
That’s what makes it virtually impossible to get through a cloud of dust and not get dusty! Except the “dust” around a particle is really the information about it.
So, coming back to our “shells”, you can imagine dots of information scattered on them.
There could be only a few of these dots. Say there’s only 10, and if you could see them, they might look like 10 tiny specks of dust.
That’s if they are stationary. But if they are shifting around on a “shell” really fast, they would appear more like a cloud of dust than 10 separate specks.
That’s the idea.
Anything passing through this “cloud of information” will have a better chance of being hit with it. Just like you’d be if you ran through the actual dust cloud – you’d get dusty, wouldn’t you?
That’s the way two particles passing near one another would get in touch with the information describing the other one.
They, too, would get “dusty”, so to speak. That’s the idea, and it’s the most efficient way to spread a small amount of information over some space. Just like dust!
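The “dust” idea is easy to simulate. In the sketch below, the patch count, the number of dots and the durations are arbitrary assumptions of mine; the takeaway is that random reshuffling makes detection very likely, but never guaranteed:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

PATCHES = 1000   # imaginary patches on one "shell" (an assumption)
DOTS = 10        # specks of information scattered on the shell
MOMENTS = 500    # how long a passing particle samples a single patch

def observer_sees_a_dot():
    """One trial: does a particle parked on one patch ever get 'dusty'?"""
    watched = 0
    for _ in range(MOMENTS):
        # Every moment, each dot reshuffles to a random patch.
        if any(random.randrange(PATCHES) == watched for _ in range(DOTS)):
            return True
    return False

trials = 2000
hits = sum(observer_sees_a_dot() for _ in range(trials))
hit_rate = hits / trials  # very high, yet a few trials miss entirely
```

That occasional miss is exactly the point made above: getting the right information is probable, not certain.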
The way a particle would move is interesting. If my hypothesis is right, then a particle can never truly rest. Even when it seems absolutely still, it has to bounce around.
What I mean is, if we say a particle is right here, it’s actually more like a “blur” around that location.
It’s as if a particle “dances” around its position. And its position is just a place where it spends the most time. Why would a particle do this?
It has to, because it’s probing the space around it for information. Without this probing, reality wouldn’t work. Why?
To start, let me state the obvious: a particle is certainly interested in information at its location. But what else?
Well, it’s not enough just to know what’s there. It needs to know where the information comes from.
This means direction and distance. That way a particle knows what speed to move at, and whether towards the information source or away from it. How can this be done?
First, consider the “viewpoint” of a particle.
It can only know the information at the exact spot where it is.
It doesn’t have eyes to see in the distance.
It knows nothing about anything in the Universe except what’s in the tiny spot it occupies.
If you think about this, there’s only one way to probe information around it: a particle randomly chooses a direction in space and “jumps” forward and backward. What does this accomplish?
A lot, actually. For any given direction, it can now collect information from two points in space. And that says which way there is more information and which way there’s less.
Based on this, a particle can “figure out” in general where to go. If there’s more information in one spot than the other, then this spot is closer. That’s easy.
But that’s not entirely precise – after all the direction chosen isn’t related to anything, it’s just random. To increase precision, a particle would choose another direction and again “jump” forward and backward.
So now a particle would basically “feel out” different directions. These directions would be random and as such cover all the angles around a particle over time.
In the end, a particle changes motion based on how much more information there is in one spot versus the other.
And in order to “cover” as many directions as possible, this process repeats constantly.
It also has to happen very fast, so it doesn’t take too long to reasonably examine the space around the particle, all 360 degrees of it.
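Here’s a small simulation of this “feeling out” process, in the spirit of what engineers call simultaneous-perturbation methods. The information field, jump size and rates are all invented for illustration:

```python
import math
import random

random.seed(0)  # fixed seed so the run is repeatable

def info_field(x, y):
    """Toy information density: strongest near a 'large mass' at (5, 0)."""
    return 1.0 / (1.0 + (x - 5.0) ** 2 + y ** 2)

def probe_step(x, y, jump=0.5, rate=2.0):
    """One 'dance': pick a random direction, jump forward and backward,
    then drift towards whichever tip held more information."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    dx, dy = jump * math.cos(angle), jump * math.sin(angle)
    push = rate * (info_field(x + dx, y + dy) - info_field(x - dx, y - dy))
    return x + push * dx, y + push * dy

x, y = 0.0, 0.0
best = info_field(x, y)  # quite low at the starting point
for _ in range(5000):
    x, y = probe_step(x, y)
    best = max(best, info_field(x, y))
# Over many dances the particle drifts towards the information source,
# even though each individual jump direction is random.
```

Notice the particle never “sees” the source directly; comparing the two tips of each random jump is enough to pull it the right way over time.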
How does a particle “dance around” its location?
It instantaneously changes its position each time it goes back and forth. Why would this be?
If a particle had to actually travel as it bounces around, even at the speed limit, the Universe wouldn’t exist. How’s that?
Simply put, it would take too long to collect information around the particle, and to process it.
In other words, the only way for a particle to quickly “feel out” the space around it is to jump instantly when it goes back and forth.
This “jumping” isn’t motion as we think of it. It’s just a way to collect information from the space around.
It’s an internal way to function, nothing more.
Basically a particle “looks” at a surrounding area by bouncing all over it. How far would it “look” in each direction?
The further away from its base position, the less likely it is to be looked at.
With each “dance” back and forth, a particle computes its change of motion. So the change in motion happens constantly and fast, almost like the way a butterfly changes course.
If you think about it, this simple model allows a particle to react to three-dimensional information in a fairly accurate way. Clearly, it’s not completely accurate, because a particle doesn’t know where others are. That’s why it’s “feeling out” the space around it.
So, if it’s lucky, it may choose the exact directions to other particles nearby. In that case, it will be the most accurate.
On the other hand, if it goes perpendicularly, that’ll be the least accurate. In reality, a particle will score somewhere in the middle over time. So the actual change of motion at any given point cannot be predicted.
Now, if you step back and think about what this looks like, you’ll see a particle will look like a blurry spread-out “thing”. It kind of stays in its “position” but it spends a lot of time “probing” the information in the space around it.
It probes more the space near it, and less so the space far away. That’s kind of apparent because it’s the information near and around it that matters the most.
In effect, a particle could be literally anywhere in the Universe, but it will most likely be where we think it is.
If you know anything about Quantum Physics, does this sound familiar? It should – this is what mainstream science thinks too, but it has no story to explain this behavior.
As you see, it’s not like that in the FTL Hypothesis – it has to be this way for the Universe to work.
So now, if you think about it, you’ll have a clear visual.
Imagine a particle that moves forward. It bounces back and forth in random directions as it goes.
It’s like a moving spread-out thing. What does that look like if you see it from the perspective of an obstacle in front of it? A wave.
Did you know current science thinks everything is both a particle and a wave? Except current science cannot explain why it would be so.
The roots of quantum behavior are easily explained in the FTL Hypothesis.
Perhaps that tells you which story is closer to the truth. If you consider this, is it a stretch to say reality is truly and ultimately informational in nature?
Think again about “jittery” motion of a particle. It “jumps” back and forth in various directions. The goal is to compare the information from all around it. This way, a particle can “figure out” the direction and distance to other particles, and ultimately how to move.
It’s like being lost in a forest without any help. You need to scout in all directions to get a clue where to go.
A particle does that by jumping back and forth in random directions. It collects information at the end points of this movement. So by comparing what’s at the “tips” of back-and-forths, it can determine in which direction other particles are.
The point is, this process is all about collecting information from two consecutive moments in time. And then processing it. That’s it.
By doing this every single moment, a particle has a way to react to the Universe around it.
To decide its next move, a particle would combine the information it “sees” now with what it “saw” just a moment ago.
A particle doesn’t have eyes to see, rather what I mean by “seeing” is simply collecting information from “shells” of other particles.
If a particle combines what it “sees” now with what it “saw” a moment ago, it means it must have “memory” to store them both, so they can be used together.
In other words, just like your computer running the simulated world, the Universe has basic memory to run the way it does. This memory is the simplest possible, as it “remembers” only a single past moment.
This conclusion is important, and I will use it shortly.
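This two-snapshot memory can be sketched in a few lines. The “information readings” below are made-up numbers; the point is that the particle keeps exactly one past moment and reacts only to the change between the two:

```python
# Minimal sketch of the "one past moment" memory: the particle keeps two
# snapshots - what it "sees" now and what it "saw" a moment ago - and
# computes its next move from the pair. Everything here is illustrative.
def next_move(seen_now, seen_before, gain=1.0):
    """React only to the change between two consecutive moments."""
    return gain * (seen_now - seen_before)

samples = [0.10, 0.12, 0.15, 0.15, 0.13]  # made-up information readings
previous = samples[0]                      # the single remembered moment
moves = []
for current in samples[1:]:
    moves.append(next_move(current, previous))
    previous = current                     # memory holds only the last moment
```

Nothing older than one moment ever survives, which is about the simplest memory a system can have.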
While knowing how exactly Nature processes information isn’t in the cards, there are still some things that can be safely assumed. One of them is that information will be used efficiently.
To illustrate that, I’ll present a simple scenario and do some basic math. Don’t worry, it’s so straightforward you probably won’t even think of it as “math”. Even so, this is exactly what will pop up in the formal hypothesis. You’ll get a feel just how simple the whole thing is.
Imagine playing a game of puzzle. The pieces to the puzzle are hidden. You have two sets of data about their location.
The first set is about the city where the pieces are. They are in San Francisco, Phoenix and Los Angeles. The second set gives more details about the exact location, so the pieces can be found in Greek restaurants and Chinese pastry shops.
What’s your plan to find all the pieces to the puzzle? It’s a piece of cake (or maybe many pieces of cake). You just have to go to these places: Greek restaurants in San Francisco, Phoenix and Los Angeles, and also Chinese pastry shops in San Francisco, Phoenix and Los Angeles.
That’s kind of obvious. To cover all the ground, you have to combine the two sets of data. You have 3 cities (SF, Phoenix and LA) and 2 kinds of eateries (Greek and Chinese pastry).
So you have a total of 3 x 2 = 6 pieces of information to follow up on. Obviously the number of actual locations will be much greater, but at least you can have fun while you do it.
I am getting to this: if you have two sets of data, and you need to use them, what do you do? I mean, in order to be efficient. Well, clearly you must combine them so all combinations are used. If you had 10 cities and 6 kinds of locations, that would be 10 x 6 = 60 pieces of information.
The point is, the end result is always a multiplication of data set sizes.
That’s the only way for no data to be left out, and still keep the work at a minimum. A model of efficiency.
For example, if you missed “Chinese pastry shops in Los Angeles”, you wouldn’t have some pieces of the puzzle. Game over, and you lost! Only by fully combining the two sets of data do you use all the possible information there is.
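If you like seeing counting in code, here’s a tiny sketch of the same combination idea, using just the city and eatery lists from the example:

```python
from itertools import product

# The two independent sets of data from the example.
cities = ["San Francisco", "Phoenix", "Los Angeles"]
eateries = ["Greek restaurant", "Chinese pastry shop"]

# Combining them pairs every city with every kind of eatery.
pairings = list(product(cities, eateries))

# The count is always the product of the set sizes: 3 x 2 = 6.
assert len(pairings) == len(cities) * len(eateries) == 6
```

No pair is missed and none is visited twice, which is exactly the “model of efficiency” the text describes.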
This is elementary logic. It’s a true statement that needs no proof or explanation. Scientists would call it a truism or an axiom.
If Nature processes information, this is how it would be done. If there are two sets of data to be processed, then every bit of information from one set would be combined (or paired) with every bit from another set.
I talked about particles mashing together information from right now and from the moment ago. These two data sets would be combined (or paired) the same way as in my puzzle game.
As I mentioned, this is the basis for the math I’ll do, and for the equations that lead to the possibility of FTL. Even if you plan to skip the formal hypothesis, this will help you understand how exactly particles slow down from too much information.
That’s coming up.
Everything that happens is ultimately just motion of particles. In any physics, Earthly, alien or otherwise, that’s the very first thing to consider.
When particles move, there’s more information for them to “see”. I covered that already. But considering where I am going with this now, I think it will help to have another way to imagine it.
It’s like when you jog in the rain. If you run (as opposed to standing still), you’ll soak faster because more rain drops will hit you.
The same way, when particles move, they will collect more information as they pass through more “shells” of other particles. A particle collects more information that way just as you collect more rain drops as you jog.
But as you run in the rain, at some point, your clothing will soak up so much, it won’t be able to absorb any more water! No matter how many rain drops hit you, they will just drip off.
Something similar happens to a particle.
As it moves faster, it takes in more information. At some point, there’s a limit on the amount collected. A particle can’t have unlimited capacity to store information. Whatever storage it has can only take so much.
What happens when the capacity is reached?
First, recall a particle has memory of the simplest kind. It collects information every single moment, and “remembers” it for just one more.
So it has current information from right now, and previous information from a moment ago. Combining of the two is what causes the change in motion.
Back to the original question. As it collects more information, a particle has only one choice to take in more.
It has to increase the storage for the information it takes in now, and the storage for what was taken in the past will shrink. The point is, the total sum of the two remains the same.
To visualize this, imagine you have two balloons. Both are blown to equal size. Suppose the rule is the total amount of air in both has to remain constant.
So if you need to blow more air into one of them, what will you do to the other? You’ll let the equal amount of air out!
Clearly, the two balloons are analogous to the particle’s information storage. One is for the current moment and the other for the moment before.
If you need to put more information into one, you’ll let some go out of the other. The information “ditched” is simply lost.
It means the storage for the present information would expand, and for the past would decline. The overall combined storage stays the same.
What is the consequence of this?
It’s that particles will work slower. They will process less information. In short, time will slow down for them. Why?
I’ll show exactly why this is. There’s an easy way to quantify it, so you’ll get a good feel for it. It’s so simple it’s hardly worth calling “math”.
Suppose a particle has 100 memory units, whatever they may be. Normally, the idea is to use half for the present information, and half for the moment ago. So basically 50/50. The two would be used together to determine what particle does.
Like in the example with pastry shops, the two data sets would be paired, or combined. So the number of those pairings would be 50 x 50 = 2500.
But suppose a particle is moving, or is close to a large object.
Now there’s more information collected, so it may be 60 memory units for the current information and 40 for the past. The total storage used is the same: 50 + 50 = 60 + 40.
The number of pairings in this case, however, would be 60 x 40 = 2400. That’s less than the original 2500.
You can see how the number of pairings is now lower as more information is taken in. Basically, a particle computes less. This proportion grows as there’s more and more coming in.
So if a particle moves really fast, it may be 90/10 for example. Now, the number of pairings is 90 x 10 = 900, a lot less than the 2500 we started from.
If I were to follow this line of thinking all the way, an interesting concept arises.
Think of a particle that moves even faster. So fast the storage for past information would shrink near zero. Meaning such particle only has what it collects right now.
So the proportion of present and past information would be practically 100/0, meaning the number of pairings is 100 x 0 = 0. What does it mean?
The computation of this particle is virtually zero. It can’t accelerate any more because it doesn’t compute any more.
It’s moving at the speed limit.
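The arithmetic above can be sketched in a few lines of code. This is just an illustration of the counting, not physics: the 100-unit memory and the splits are the toy numbers from the example.

```python
def pairings(current, previous):
    """Number of pairings between current and previous information,
    given a particle's memory split (current + previous is fixed)."""
    return current * previous

# At rest: an even split maximizes the computation.
assert pairings(50, 50) == 2500
# In motion: more memory goes to current information, less remains
# for the past, and the pairing count drops.
assert pairings(60, 40) == 2400
assert pairings(90, 10) == 900
# At the limit: no past information survives, so nothing is computed.
assert pairings(100, 0) == 0
```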
It also means it reacts to the world around it very little, if at all. It can move with the minimum interference. It’s a perfect carrier of observable information. Such a particle would be extremely useful. Sound familiar?
There is something like this in Nature, and it’s called a photon. Or, as it’s commonly known, light.
Photons are the very basis for us to see the world for what it is. If I am right, they exist because of limits in information processing.
There’s one thing to clarify, though.
I said the computation of a photon is virtually zero. It means it’s very low. But it’s not zero.
If a particle has the ability to compute, then it will compute, even if it’s very slow. Why?
There are certain things that always have to be computed. They aren’t related to particles processing information from other particles. What are those things?
They are related to the very basic premise of limited resources.
For example, gravity is a way to preserve those resources and keep their usage low. Basically, a particle must still compute in order to keep that same computation to a minimum.
I talked about a particle computing less as it collects more and more information.
That’s because resources available to process it are limited. I gave you an example showing how the number of pairings declines.
What does that mean? What conclusions can I draw? That’s the topic here.
I explained earlier a particle combines two sets of information, one from now and one from just a moment ago.
It does that by pairing the data from both sets. So if each set has 50 pieces in it, that’s 50 x 50 = 2500 pairings, just like in the example I gave you.
You may ask if these pairings happen in sequence, or in parallel? In the above case, do 2500 pairings happen one after the other, or all at once?
If Nature were to be efficient, they would happen in parallel. There is no reason for them to happen in sequence. No pairing depends on any other, so there’s no point in waiting for one to process before moving on to the next.
It’s another way of saying no matter how much information a particle processes, it takes exactly the same time. 900 pairings will take the same time to compute as 2500. So Nature is awesome in its ability to compute in parallel.
But as I said, a number of pairings may decline when a particle is in motion or close to large objects.
In the example I gave you earlier, when the number of pairings drops to 900 down from 2500, a particle would compute slower.
How much slower? It’s about the throughput of information. Let me explain.
If there’s 50 pieces of information collected in the previous moment and 50 now, what’s the throughput?
It’s 50. It’s the amount of information a particle collects every moment. It’s what comes into it in a unit of time, on average.
So even though there are 50 x 50 = 2500 pairings to process, the amount of information that came into particle at every moment is 50, so clearly, that’s the throughput.
But what about if there’s more information to process?
As I explained, the storage for past information will shrink and for the current it will expand.
For example, instead of 50/50, it may be 90/10. That’s 90 pieces from the here and now, and 10 pieces of information from the moment ago. What’s the throughput now?
It’s equivalent to having an equal amount of information from both the present and the previous moment. That’s the only true measure of throughput, because that’s what it averages to over a period of time.
In this case, it would be based on 90 x 10 = 900 pairings, but taken as if there’s an equal information collected in both moments, instead of 10 and 90.
Apparently that would be as if there’s 30 pieces in each moment, because 30 x 30 = 900. So the throughput is 30.
What kind of particle might have such a low throughput?
It’s either moving real fast, or is near a very massive object. In both cases the amount of information it collects will increase.
And what if it doesn’t move or is far from large objects? In other words, how does it compare to a throughput of 50?
To reach that kind of information processing, a particle would have to spend about 1.67 moments on average (50 / 30 = 1.67 approximately).
So a particle in motion, or near a massive object, would take longer to do the same thing. Time slows down for it.
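Here is the same throughput arithmetic as a sketch in code, using the toy numbers from the example, with throughput taken as the equal-split equivalent of the pairing count:

```python
import math

def throughput(current, previous):
    """Equal-split equivalent of the pairing count: sqrt(current * previous)."""
    return math.sqrt(current * previous)

# At rest, a 50/50 split gives a throughput of 50.
assert throughput(50, 50) == 50.0
# In motion, a 90/10 split gives 900 pairings, the same as an
# equal 30/30 split, so the throughput is 30.
assert throughput(90, 10) == 30.0

# The moving particle needs about 50/30 = 1.67 moments to do what
# the resting one does in a single moment.
slowdown = throughput(50, 50) / throughput(90, 10)
assert abs(slowdown - 50 / 30) < 1e-12
```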
Since all of reality, including ourselves, is made of particles, time slows down for everything.
This is the same idea and the same story I’ve talked about before. Now I’ve shown how to put it in numbers, with a simple example.
The elementary calculation I did here is the core of the paper. It’s all about the throughput of information processing. That is, how much information is actually processed per unit of time.
As a concept, it can’t get much simpler.
What does it really mean for us that time can slow down?
First off, clocks are made of particles, so they would tick slower. But we’re made of particles too. So it’s everything that slows down.
Everything that is your life is the result of particles processing information. It’s not just clocks. All biological processes slow down too. You’re really aging less.
Consider an example of moving super fast right here on Earth.
In that case, only 20 seconds may pass for you, while the person standing still may measure 30 seconds. Time really does slow down.
You would have aged 30-20=10 seconds less. And the question is, how’s this different from physics as we know it?
In the mainstream theory, the story is a bit complicated: time for both you and the stationary person would technically slow down. So you would age 10 seconds less than the other person. And the other person would also age 10 seconds less than you.
Hmm. Obviously that’s impossible. It takes some mind boggling physics from General Relativity to sort this out. It’s too complex to describe it here, because... well it is mind boggling.
To top it off, Relativity says time is utterly relative, meaning there’s no “absolute” time. What’s absolute time?
That would be a clock which ticks at a constant rate, with all other clocks relative to it. In Relativity you could say everyone’s clock is both slower and faster relative to all other clocks.
That does sound impossible and, as I said, it takes a bit of mathematical magic to get out of that hole. I can’t explain it here in a way that would make sense easily, or maybe at all. If you can, your magic is awesome. But the math eventually works out even if it’s in a way that feels complicated and perplexing. At least that’s how I feel about it.
What about how all this works in my hypothesis?
The FTL Hypothesis is simpler than Relativity. You really aged 10 seconds less. The other guy really aged 10 seconds more, as you’d expect.
Whoever is moving relative to a large mass, in this case Earth, ages less. That’s why your clock ticked slower.
And unlike in Relativity, an absolute clock does exist. It’s a clock that’s very far away from all large mass, which makes sense if large mass affects clocks.
So imagine a clock really, really far away from other objects. This clock ticks as a baseline. It ticks the fastest.
All other clocks in the Universe tick slower because they are closer to each other. Some tick a bit slower, and those moving fast may tick a lot slower.
This is the result of the FTL Hypothesis, and it is much simpler in comparison.
Its results match both Special and General Relativity, but they do so in a way that isn’t perplexing. If you don’t know what these theories are, it’s all right. They are the crown jewels of today’s physics.
If you know more about today’s physics, and want to get technical, here’s the skinny.
In Relativity, the slowing of clocks from moving fast depends only on speed, and not on mass and distance. In my hypothesis it depends on mass and distance too. That’s it.
This dependency is super weak near Earth and whenever dealing with small objects, like particles. That’s why my hypothesis reduces to Relativity. But for a big starship, this dependency is huge. This about sums it up.
Now let me be clear. The current experimental data checks out with both my hypothesis and Relativity.
It’s just my hypothesis has predictions that contradict Relativity in some situations we never tried before. And those predictions await someone to either confirm or deny them.
Each particle in the Universe “projects” information about itself everywhere around it.
This information is like many layers of “shells” that are very dense near the particle and very sparse far away. Each “shell” has all of particle’s information scattered on it.
These “shells” move with the particle without any delay – they are a part of the particle.
The information on shells shifts randomly all around the particle, like a cloud of dust. This ensures another particle passing nearby will most likely “see” this information.
So, particles move through the “dust” of information that comes from all other particles, near and far. Those far contribute little information and those near contribute a lot.
They all “jump around” their positions in order to know where the information comes from.
A particle retains information it collected just a moment ago and combines it with information from right now. As a result, it changes motion.
Simple, isn’t it?
It’s in Equations (47) and (48) in the paper later on. All the equations here are from that paper, so don’t worry if they look unfamiliar. You can see how these are derived in just a bit.
They show the difference between current theories and the FTL Hypothesis, at least when it comes to achieving faster than light speeds.
In mainstream theories, if you speed away from Earth in a large ship, and you approach the speed of light, the clock on your ship will slow down relative to Earth. And you can never accelerate past the speed of light.
Here’s the equation from Relativity, which is the same as Equation (28) derived in my hypothesis:

$$t' = \frac{t}{\sqrt{1 - v^2/c^2}}$$

As you can see, as speed $v$ approaches the speed of light $c$, the time $t'$ it takes to do anything goes to infinity.
In practical terms this means the time it takes to do anything will be so long, you’ll never finish it. You can’t go past the speed limit, which is the speed of light as measured on Earth.
In the FTL Hypothesis, the clock onboard the ship will initially slow down, but then it will speed up to be more or less the same as on Earth.
Once sufficiently away from Earth, you can achieve speeds of twice, 10 times, 100 times (and more) the speed of light.
The reason lies in how much information there is to process.
I’ve spoken a great deal about it already, and how it declines far from large mass. It’s a variable that can be calculated.
I named it “information influence”, and it can have a value between 0 and 1. It’s close to 1 near large mass and close to 0 far away from it.
Here it is, exactly as in the paper that follows, expressed via the information $I_i$ and $I_j$ of two objects $i$ and $j$ and the distance $d_{i,j}$ between them. Information influence $F_{i,j}$ is a measure of how much information object $i$ collects from object $j$.
In this case it is for a large object ($i$) far from massive bodies like Earth or the Sun ($j$). As I explained in this book, it gets smaller the further out the object is, and I’ll show it is exactly:

$$F_{i,j} = \frac{I_j/d_{i,j}^2}{I_i + I_j/d_{i,j}^2} \approx \frac{I_j}{I_i\,d_{i,j}^2}$$

and far away in deep space the slowing down of clocks becomes

$$t' = \frac{t}{\sqrt{1 - \dfrac{F_{i,j}^2\,v^2}{c^2}}} \quad \text{(from 17 and 27)}$$

$$t' \to t \;\text{ as }\; d_{i,j} \to \infty \quad \text{(from 48 and 27)}$$
This is different from the original Relativity equation, in which the time to do anything becomes infinitely long. Here it doesn’t; in fact it stays the same in the best possible case.
That’s because time dilation depends not just on speed $v$, but also on information influence $F_{i,j}$.
This means the speed limit is much higher in deep space.
The above equation assumes you’re infinitely far away from other objects, which is the best possible case. You’ll never be like that. You may get very far away from Earth, but you’ll get closer to wherever you’re going.
So the exact maximum speed you can achieve depends on your itinerary. It’s different depending on where you are.
Mostly it depends on how long a time you spend really far away from any large mass.
In general this maximum speed can be much higher than $c$, i.e. the speed of light as measured on Earth.
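To make the comparison concrete, here is a small sketch of the two time-dilation formulas discussed above: the one from Relativity, and the one from my hypothesis where the same factor is moderated by information influence. The influence values below are made-up illustrations, not computed from any real itinerary.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def dilation_relativity(v):
    """Relativity: clocks slow by a factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilation_ftl(v, influence):
    """FTL Hypothesis: the same factor, moderated by information
    influence (close to 1 near large mass, close to 0 in deep space)."""
    return 1.0 / math.sqrt(1.0 - (influence * v / C) ** 2)

# Near Earth (influence ~ 1) the two agree.
v = 0.6 * C
assert abs(dilation_ftl(v, 1.0) - dilation_relativity(v)) < 1e-9

# In deep space (influence ~ 0.001) even twice the speed of light
# dilates time by almost nothing.
assert dilation_ftl(2 * C, 0.001) < 1.00001
```

Note that `dilation_relativity` is undefined past `C`, while `dilation_ftl` stays finite as long as `influence * v` is below `C`, which is the whole point of the hypothesis.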
Physical reality is driven by information solely possessed and used by elementary particles.
The new kind of information we hypothesize about is used internally by Nature. If such information exists, we can’t observe it or otherwise it wouldn’t be internal to Nature. However, we can use this model to explain relativistic, gravitational and quantum effects without said theories. In addition we can make new predictions (awaiting verification) that contradict existing theories and can serve as a proof of the hypothesis’ validity.
In here, “information” will mean Nature’s internal information, the existence of which we’re hypothesizing.
We will not use any known physics or otherwise this would be circular reasoning. For the same reason, we can’t use known information theories as they are based on known physics.
In this model, there is an object and a field of information around it.
We assume that physical processes are solely driven by information use.
The hypothesis is that physical objects intrinsically carry and share information in space around them and also use this shared information. The information in space around objects is the information field.
The only result of this use of information is change of motion.
The density of information field declines with the square of distance. Hence, information from far-away objects is sparse, and from close-by objects is dense.
When object changes its state of motion, so does its information field without any propagation delay. The information and the object are inseparable.
An object is a source of information and an information processor. Each object moves through the field of information originating in all objects, meaning an object has access to information from all objects.
An object collects information at its location, uses it, and then changes the state of motion.
Change of motion is the result of change of information within an object. This means if information an object has access to changes in time, then and only then there can be a change of motion.
Since information exists in the spatial location where an object is, as much of that information as possible will be used by the object to change its motion. Hence if the information changes from one moment in time to the next, an object may change its motion.
Thus we will assume the existence of “previous” and “current” information held by an object at two consecutive moments in time, and denote them as $P_i$ for the “previous information” of object $i$ and $C_i$ for its “current information”.
When an object processes information, it does so by combining the previous and current ones. The amount of new information that can be consumed to change the object’s motion is a pairing of the two:

$$N_i = P_i \cdot C_i \qquad (1)$$

An object has some finite information storage for both previous and current information, so we write:

$$P_i + C_i = S_i$$

$S_i$ is the information capacity of an object and it doesn't change.
We assume the information storage for each moment in time is equal to the previous:

$$P_i = C_i = \frac{S_i}{2}$$

The throughput of information in an object will clearly be $C_i$ - this is the amount of current information an object has at any time. So the throughput is, in relation to the new information consumed from (1):

$$T_i = C_i = \sqrt{P_i \cdot C_i} = \sqrt{N_i}$$
If objects move relative to each other, then the amount of information seen by each object increases proportional to the relative speeds of objects. Each object has extra information that’s due to relative motion of all other objects.
Assume we have $N$ objects in total, and the distance between some object $j$ and our object $i$ is $d_{i,j}$, while their relative speed is $v_{i,j}$.
The extra information in object $i$ is coming from all $N$ objects. Those objects are at some distances, so their information $I_j$ is divided by the square of the distance between them.
Thus $\sum_{k=1}^{N} I_k/d_{i,k}^2$ is the total information from all objects at the location of $i$. Then, $\left(I_j/d_{i,j}^2\right) \big/ \sum_{k=1}^{N} I_k/d_{i,k}^2$ is the share of object $j$'s information. For example, assume there is an information volume of 1,000,000 present at the location of object $i$. If, out of 1,000,000, there is 1000 from object $j$, then the share of object $j$'s information at the location of $i$ is $1000/1000000 = 0.001$, meaning one tenth of a percent of all information comes from $j$.
Since all information has an equal chance to be actually collected and used by $i$, the information volume used by $i$ that comes from $j$ is $C_i \cdot \left(I_j/d_{i,j}^2\right) \big/ \sum_{k=1}^{N} I_k/d_{i,k}^2$.
Next, the higher the relative speed between $i$ and $j$, the more of $j$'s information will be collected in a fixed unit of time, hence this volume is multiplied by $K\, v_{i,j}$, with constant $K$ moderating the impact of the relative speed $v_{i,j}$.
Finally, we sum this up across all objects, and we arrive at the most probable amount of extra information an object collects from all objects:

$$\Delta C_i = C_i \sum_{j=1}^{N} \frac{I_j/d_{i,j}^2}{\sum_{k=1}^{N} I_k/d_{i,k}^2}\, K\, v_{i,j} \qquad (5)$$

From the above equation (5), we get for the relative increase of information:

$$\frac{\Delta C_i}{C_i} = \sum_{j=1}^{N} \frac{I_j/d_{i,j}^2}{\sum_{k=1}^{N} I_k/d_{i,k}^2}\, K\, v_{i,j} \qquad (5.1)$$

As we explained leading up to (5), the share $\left(I_j/d_{i,j}^2\right) \big/ \sum_{k=1}^{N} I_k/d_{i,k}^2$ is the average percentage of information from object $j$ that object $i$ can use. This tells us how much object $j$'s information “influences” $i$. So we denote this share from (5.1) as the information influence $F_{i,j}$:

$$F_{i,j} = \frac{I_j/d_{i,j}^2}{\sum_{k=1}^{N} I_k/d_{i,k}^2} \qquad (5.2)$$

It is obvious that the sum of all information influence on an object is exactly 1, i.e. the probability that an object will use all the information it can is 100%, which must always be true:

$$\sum_{j=1}^{N} F_{i,j} = 1 \qquad (5.3)$$

So we can write our equation (5.1), by using (5.2), as:

$$\frac{\Delta C_i}{C_i} = K \sum_{j=1}^{N} F_{i,j}\, v_{i,j} \qquad (5.4)$$
The relative change of information is proportional to the information influence and relative speeds.
It makes sense: if all objects “swim” in the information “soup”, then the volume of processing depends on the number and relative speeds of objects.
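As a sketch, the information influence just described, each object's share of the inverse-square-weighted information at another object's location, can be computed for a toy system. All the numbers below are illustrative assumptions:

```python
def influence(i, j, info, dist):
    """Information influence of object j on object i: the share of j's
    inverse-square-weighted information at i's location.
    info[k] is the information content of object k, dist[i][k] the
    distance from i to k (an object's distance to itself is 1)."""
    total = sum(info[k] / dist[i][k] ** 2 for k in range(len(info)))
    return (info[j] / dist[i][j] ** 2) / total

# A hypothetical 3-object system.
info = [1.0, 1000.0, 5.0]
dist = [[1, 10, 2],
        [10, 1, 8],
        [2, 8, 1]]

# The influences on any object sum to exactly 1.
assert abs(sum(influence(0, j, info, dist) for j in range(3)) - 1.0) < 1e-12
# The large object dominates the influence on object 0.
assert influence(0, 1, info, dist) > influence(0, 2, info, dist)
```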
The extra information from (5) is stored into $C_i$, i.e. into the current information. This means the storage for it will increase. But, in order for the total to remain constant, the storage for previous information will decrease:

$$\tilde{C}_i = C_i + \Delta C_i \qquad (6)$$

$$\tilde{P}_i = P_i - \Delta C_i \qquad (7)$$

The tilde denotes the new storage allocated to current and previous information, in order for the total to remain constant:

$$\tilde{P}_i + \tilde{C}_i = P_i + C_i = S_i$$

The throughput of information is now:

$$\tilde{T}_i = \sqrt{\tilde{P}_i \cdot \tilde{C}_i} \qquad (9)$$

So the throughput of information processing is now, from equations (9) and (5.4), using $P_i = C_i$:

$$\tilde{T}_i = \sqrt{(C_i + \Delta C_i)(P_i - \Delta C_i)} = C_i \sqrt{1 - \left(K \sum_{j=1}^{N} F_{i,j}\, v_{i,j}\right)^2}$$
This throughput represents the volume of information processed in a unit of time.
The consequence of (6) and (7) is that some previous information must be lost to make room for the extra information from (5).
This means that Nature’s computation is lossy. It means Nature can never, even in principle, be pre-calculated. Since the notion of information we’re discussing (i.e. Nature’s internal information) can never be observed by us in advance, it means Nature must be non-deterministic.
The throughput when all objects are at rest is the base throughput:

$$T_i = \sqrt{P_i \cdot C_i} = \frac{S_i}{2} \qquad (14)$$

So we can now express the throughput in terms of the base one from (14):

$$\tilde{T}_i = T_i \sqrt{1 - K^2 \left(\sum_{j=1}^{N} F_{i,j}\, v_{i,j}\right)^2}$$

We can also present this equation in terms of the time it takes an object to change its motion, i.e. its responsiveness. Clearly, when an object does not have any extra information to process, it will perform the quickest.
Thus the higher the information throughput, the shorter the time needed to perform:

$$\frac{\tilde{t}_i}{t_i} = \frac{T_i}{\tilde{T}_i}$$

or upside down:

$$\tilde{t}_i = \frac{t_i}{\sqrt{1 - K^2 \left(\sum_{j=1}^{N} F_{i,j}\, v_{i,j}\right)^2}} \qquad (17)$$
A special case is a small object near an isolated large one. Let the large object be $j$ and the small object $i$. The information influence of the large object on the small object is, per (5.2):

$$F_{i,j} = \frac{I_j/d_{i,j}^2}{I_i/d_{i,i}^2 + I_j/d_{i,j}^2}$$

In here, $d_{i,i}$ is the distance of object $i$ to itself, which is always 1. So:

$$F_{i,j} = \frac{I_j/d_{i,j}^2}{I_i + I_j/d_{i,j}^2} \qquad (19)$$

For two isolated objects, equation (17) becomes:

$$\tilde{t}_i = \frac{t_i}{\sqrt{1 - K^2\, F_{i,j}^2\, v_{i,j}^2}} \qquad (18)$$

Here we assume the large object $j$ is so huge (i.e. $I_i$ is so small next to $I_j/d_{i,j}^2$) that the expression (19) is:

$$F_{i,j} \approx 1 \qquad (20)$$

Then the above equation (18) for how slow the object behaves when moving relative to an isolated large object is, by using (20):

$$\tilde{t}_i = \frac{t_i}{\sqrt{1 - K^2\, v_{i,j}^2}} \qquad (21)$$

Similarly, to find out how slow the large object behaves when moving relative to the small object, we calculate the same from equation (18), just with the indexes $i$ and $j$ reversed:

$$\tilde{t}_j = \frac{t_j}{\sqrt{1 - K^2\, F_{j,i}^2\, v_{i,j}^2}}, \qquad F_{j,i} = \frac{I_i/d_{i,j}^2}{I_j + I_i/d_{i,j}^2}$$

The assumption of the large object being so huge is still true (i.e. $I_j$ is so huge that $F_{j,i} \approx 0$), so the above expression becomes:

$$\tilde{t}_j \approx t_j$$

which says that the large object will not behave any slower.
An example of the above scenario is a small object moving relative to a large one. The above says a small clock in motion relative to a large clock will slow down its ticking, while the large one won’t as much.
What is the constant $K$? We can figure that out if we assume that the speed of an object is so high (we will denote it as $v_{max}$) that its processing of information slows down to zero, meaning it takes infinity to do anything, from equation (21):

$$\tilde{t}_i = \frac{t_i}{\sqrt{1 - K^2\, v_{max}^2}} \to \infty$$

For this to be true, it must be that $1 - K^2\, v_{max}^2 = 0$, and clearly:

$$K = \frac{1}{v_{max}}$$

So the constant $K$ is the inverse of the maximum speed that a small object can achieve near a large isolated object. Since this speed is apparently a constant, we call it $c$:

$$K = \frac{1}{c} \qquad (27)$$

and the equation for how slow object $i$ behaves when moving relative to a large isolated object is now, from equations (21) and (27):

$$\tilde{t}_i = \frac{t_i}{\sqrt{1 - \dfrac{v_{i,j}^2}{c^2}}} \qquad (28)$$
From this, we see that responsiveness of an object slows down in motion, and also that there must be a speed limit. This is consistent with the results of Special Theory of Relativity.
So far we didn't talk about whether information itself moves relative to an object. Our hypothesis is that it exists around an object, and moves with the object, but does it also move relative to the object?
It has to move, because any point in space should have equal chance to host any information. Otherwise, some point in space would have 100% chance of holding information, while many other points would be 0%.
If information moves, that has the same effect as if an object moves. So then, even if objects are at rest, they will behave slower. How much slower?
To find out, we will assume that information does not move, but objects are moving apart at some speed, all the way to infinity. By finding this equivalent speed at any given distance, we can figure it out.
We start from the basic equation (13) we have for extra information when objects move:

$$\frac{\Delta C_i}{C_i} = K \sum_{j=1}^{N} F_{i,j}\, v_{i,j} \qquad (13)$$

In the case of two isolated objects (meaning $N = 2$):

$$\frac{\Delta C_i}{C_i} = K \left(F_{i,i}\, v_{i,i} + F_{i,j}\, v_{i,j}\right)$$

and knowing that $v_{i,i} = 0$, i.e. the speed of an object relative to itself is always zero, and also putting in our $K = 1/c$ from equation (27):

$$\frac{\Delta C_i}{C_i} = \frac{F_{i,j}\, v_e}{c}$$

So here, $v_e$ is the equivalent speed at some distance $d$ that makes the slowing effect the same as does the moving of information.
Before we continue, let's make some obvious observations:
#1, the more information each object has, the more information there will be in space around them.
#2, the longer the time period, the more extra information from other objects there will be.
#3, the greater the distance between objects, the less information they will see from each other.
Let's focus on #3 here. If objects move apart by a small distance $\Delta d$ (meaning the distance increases a bit), that's the same as if the equivalent speed between them decreased by a bit, $\Delta v$. So the change of information is, if we assume the information influence $F_{i,j}$ is constant:

$$\Delta\!\left(\frac{\Delta C_i}{C_i}\right) = -\frac{F_{i,j}\, \Delta v}{c}$$
Now let's go back to #1 and #2 above.
The more information $I_j$ there is from $j$, and the higher the information influence $F_{i,j}$ of $j$ on $i$ is, the greater the change will be. The greater the time period $\Delta t$ is, the more information will be seen. The further away from object $j$ we are, the less information there is (this is the inverse square of distance $d$). Finally, there is some constant $G_0$. So the change of extra information from distance $d$ to distance $d + \Delta d$ should be:

$$\Delta\!\left(\frac{\Delta C_i}{C_i}\right) = -G_0\, I_j\, F_{i,j}\, \frac{\Delta t}{d^2}$$
We need to find the speed $v_e$ so that the above two equations amount to the same thing.
By equating the two, we have:

$$\frac{F_{i,j}\, \Delta v}{c} = G_0\, I_j\, F_{i,j}\, \frac{\Delta t}{d^2}$$

In here, we can take advantage of this trivial equation (i.e. distance equals speed multiplied by time):

$$\Delta d = v\, \Delta t$$

so we have:

$$v\, \Delta v = c\, G_0\, I_j\, \frac{\Delta d}{d^2} \qquad (35)$$

Our idea was to move from distance $d$ to infinity, while the relative speed declines from $v_e$ to zero, so we can integrate:

$$\int_0^{v_e} v\, dv = c\, G_0\, I_j \int_d^{\infty} \frac{dx}{x^2}$$

This is trivial, giving us:

$$\frac{v_e^2}{2} = \frac{c\, G_0\, I_j}{d}$$

And so the equivalent speed that makes for the same slower behavior as the motion of information does, is:

$$v_e = \sqrt{\frac{2\, c\, G_0\, I_j}{d}}$$

Putting that in equation (28):

$$\tilde{t}_i = \frac{t_i}{\sqrt{1 - \dfrac{2\, G_0\, I_j}{c\, d}}}$$

Since both $c$ and $G_0$ are constants, we shorten their multiplication to $G = c\, G_0$:

$$\tilde{t}_i = \frac{t_i}{\sqrt{1 - \dfrac{2\, G\, I_j}{c^2\, d}}}$$
This means an object at rest at distance $d$ from object $j$ will perform tasks slower, in accordance with the above equation. It means a clock would tick slower when close to another object. This is consistent with the General Theory of Relativity. It is also clear that mass is equivalent to the information content of an object.
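A quick numeric check of the integration above, with the constants $c$, $G_0$ and the information content $I_j$ all set to 1 for readability (toy values, not physical ones): integrating $v\,dv = c\,G_0\,I_j\,dx/x^2$ outward from $d$ should reproduce $v_e = \sqrt{2\,c\,G_0\,I_j/d}$.

```python
import math

# Toy constants so the closed form is easy to read.
c, G0, I_j, d = 1.0, 1.0, 1.0, 1.0

# Closed form from the integration: v_e = sqrt(2 * c * G0 * I_j / d).
v_closed = math.sqrt(2 * c * G0 * I_j / d)

# Numeric check: midpoint-rule integration of c*G0*I_j/x^2 from d outward
# (a large but finite upper limit stands in for infinity).
steps, x_max = 1_000_000, 1000.0
dx = (x_max - d) / steps
integral = sum(c * G0 * I_j * dx / (d + (k + 0.5) * dx) ** 2
               for k in range(steps))
v_numeric = math.sqrt(2 * integral)

assert abs(v_numeric - v_closed) / v_closed < 0.01  # agree within 1%
```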
If processing of information drives change of motion, then we'll assume that the processing power of an object has limits. Put differently, after performing a certain number of information pairings, an object will be spent. Its information will still be seen by other objects, but it won't do anything anymore.
For that reason, we can also assume that an object will try to spend less of its processing power. One way to do that is to try to move closer to other objects, permanently. The reason for this is simple from the above equation: objects perform less information processing when closer together, or in other words, everything they do is slower, buying time.
To calculate how would objects “huddle” together to reduce their informational footprint, we will investigate if this “huddling” motion can cost nothing in terms of informational throughput.
So, the equation for how exactly objects “huddle” together can be easily obtained from above. We will simply divide equation (35) by $\Delta t$ (i.e. a small interval of time):

$$v\, \frac{\Delta v}{\Delta t} = c\, G_0\, I_j\, \frac{\Delta d / \Delta t}{d^2}$$

and knowing that speed is simply distance over time, $v = \Delta d / \Delta t$, while acceleration is $a = \Delta v / \Delta t$:

$$v\, a = c\, G_0\, I_j\, \frac{v}{d^2}$$

and we get this:

$$a = \frac{c\, G_0\, I_j}{d^2}$$

By using our previous convention $G = c\, G_0$, we get:

$$a = \frac{G\, I_j}{d^2} \qquad (44)$$
This says that objects should universally attract each other, and more specifically that the acceleration is inversely proportional to the square of distance and directly proportional to the information content. This is consistent with the Law of Gravity. Again, we see that mass is equivalent to information content.
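As a sanity check of the form $a = G\,I_j/d^2$: if the information content $I_j$ is read as mass and $G$ as the gravitational constant (an identification the equation suggests but does not by itself establish), the familiar surface gravity of Earth comes out:

```python
G = 6.674e-11    # gravitational constant, if G is identified with it
I_j = 5.972e24   # Earth's mass, if information content is read as mass

def acceleration(d):
    """Attraction per equation (44): a = G * I_j / d^2."""
    return G * I_j / d ** 2

# At Earth's surface radius this gives the familiar ~9.8 m/s^2.
assert 9.7 < acceleration(6.371e6) < 9.9
```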
It should be noted that this attraction may become irregular at great distances. One reason is because information content becomes very thin far away. Another is because information content shifts randomly around an object and so is not fixed and we can only speak of probabilities. It means the attraction between objects at great distances can have irregularities that do not comply with the equation (44).
Our equation (17):

$$\tilde{t}_i = \frac{t_i}{\sqrt{1 - K^2 \left(\sum_{j=1}^{N} F_{i,j}\, v_{i,j}\right)^2}} \qquad (17)$$

says that a clock in motion would tick slower, and that the exact slowdown is an N-body problem. We also predict that the degree to which a clock ticks slower involves information content and distance too, in general, except in the case of being near an isolated large object.
When a sizable object $i$ moves at high speed away from a large object $j$, meaning their distance becomes huge, we have from equation (19):

$$F_{i,j} \approx \frac{I_j}{I_i\, d_{i,j}^2} \qquad (47)$$

and for a 2-body model (ignoring the rest of the Universe):

$$\tilde{t}_i = \frac{t_i}{\sqrt{1 - \dfrac{F_{i,j}^2\, v_{i,j}^2}{c^2}}} \qquad (48)$$
What this means is that we predict the slowdown in a clock’s ticks should decline with distance from a large object. This is an effect NOT predicted by current physics; in fact, it contradicts current physics.
The result of (48) can be tested this way: build a large probe and send it out, away from a massive object (such as Earth), on a trajectory that takes it far from other massive objects (planets and the Sun). This is the opposite of what our probes currently do. Once far enough away, the clock on this probe will no longer slow down even as its speed away from the massive object increases, adjusted for the time-slowing effect of large mass.
A positive result would corroborate the idea of an information-driven Universe, along with the prediction that a massive vessel far from large celestial bodies can accelerate to speeds faster than the speed of light.
We have derived a fairly involved mathematical representation of a Universe driven by information, without using any known physics. We have concluded that in such a Universe there must be a speed limit. We have also concluded that clocks in motion would slow down their ticks, and that the mere presence of other objects also slows down clocks. We have deduced that objects would intrinsically attract each other. And we did so in a mathematical form matching current science.
A basic conclusion of our hypothesis is that such a Universe is conceptually non-deterministic, or more specifically, that Nature’s computations are lossy: information used to enact the very physical laws is commonly and constantly lost.
In addition, we have concluded that the slowing down of clocks due to speed will dissipate over distance and with clock mass, contradicting current science.
An experiment with a deep-space probe is suggested to corroborate the findings. If positive, it would indicate that away from large mass, the speed limit is much higher than near it, meaning that Faster Than Light travel through deep space is possible, and more likely so in a massive vessel.
Why does anything happen the way it does? Perhaps to answer this question it is best to start with another question. Can anything ever happen without a reason? Maybe. But if it does, that’s typically because we don’t know what the reason is. The reason is usually found, even if belatedly. So maybe everything does happen for a reason.
And the definition of reason … is applying logic to facts. What are facts? Information by another name. Is it then so hard to believe the Universe runs by processing information? Ultimately, how else would it run? For example, think of a simple case of two electrons repelling one another. What is the reason for this to happen?
That question goes beyond just measuring and describing the phenomenon. To put it in practical terms, what underlying logic do these electrons follow, and why? What logic does any particle follow in any situation? And what is it they must possess in order to work?
I think there’s only one answer that doesn’t leave us in the dark.
It’s simple. If there is no information to fuel logic, all that’s left is magic. We know this to be true for everything. So why not for the very basis of our reality? That may be the best and ultimately the only explanation why Nature is fundamentally based on information processing. For the sake of argument, however, let’s consider how the two approaches fare in comparison.
The currently accepted view is that Universe runs by means of physical laws. There are laws, and everything follows them. This includes every single particle in Nature. Let me ask you: aren’t today’s scientists mistaking particles for people? Because people follow laws. Well, in general they do. Always have, for thousands of years.
It is in our nature to create societies ruled by some kind of decree. We form governments instituting regulations that must be followed by everyone. All of us, including physicists, live in a system of governance. I am saying this system plays an overarching role on many levels, from an ordinary frame of mind to the very sustenance of research.
So is it a wonder the concept of governing crept into physics theories as well? The general idea of societal laws was carried over to Nature’s own. That carryover is a bit obvious, but rarely discussed. Perhaps because it has deep roots in our subconscious, it became virtually invisible to critique.
The point is, it’s very easy to think everything follows laws, including electrons and all other particles. In my opinion that’s not a relevant basis for explaining Nature. That’s a basis to run human society. So if you compare “following laws” with “applying logic to facts”, how do you think reality works? I’d bet on the latter.
Pondering the concept of physical laws, more questions emerge. Why are there laws? And who wrote them? Conceivably a higher being, or maybe not, but modern physics never really answers this question. There’s perhaps something quasi-religious about all that. Just think about the concept of “commandments” and its more than passing resemblance to the physical-laws approach.
Maybe that’s not such a wonder, because science evolved over time, and often together with religion. Sometimes even under the auspices of religion. It’s not a big surprise there’s somewhat of a mystical air about the notion of physical laws. Perhaps these laws are entirely natural, or perhaps ultimately they were proclaimed into existence by someone or something higher. I am not dismissing it at all. That’s not the point.
The point is, the notion of laws is very… human. It doesn’t say anything about how the laws work. And that is the point! The FTL Hypothesis tackles the question of how. And the answer offered is by applying logic to facts, or in other words, by processing information.
There’s still a matter of “laws” hidden in this concept too. Because ultimately these physical “laws” have their counterpart in the way information is processed. A “law” would be a natural “algorithm” that processes information.
Even though the notion of a law can’t quite be extinguished, I think this makes more sense, because there’s an internal cause-and-effect. More importantly, the view of physical laws is now broken down into processing information. And this computing operates on elementary facts. Each fact clearly originates in material entities like particles.
This is a model beyond the ever-contemporary look and feel in physics. It doesn’t emulate, duplicate or mimic our societal structures or the system of governance. It’s a simpler way of looking at things, I believe, and one that ultimately makes more sense.
I said that, in the end, physical laws correspond to natural “algorithms” that process information. These “algorithms” can’t really be observed, and the term is only loosely applied. That’s because information processing is internal to Nature, as I explained earlier.
However, this understanding allows us to model the Universe. And this model matches reality as we see it and even yields new predictions. So how, in broad terms, did we create this model without knowing the internal workings?
To understand this, consider a computer program that does something for you. Assume you don’t have access to its source code. Now, by using this program, you can learn its functionality. But you can also get a pretty good idea about how it operates. That’s true even though you don’t know exactly how it works. Why’s that?
Well, for one, you can see what data is necessary to produce some results. You can make a good guess about the general way it works by observing the program’s input and output and the time it takes to get you those results.
The point is, even though you will never know exactly how the program works, there’s a lot you can know, just by using elementary logic. Very roughly, this is how I developed the hypothesis.
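The black-box reasoning described above can be sketched in code. The snippet below is a toy illustration of my own (not taken from the text): it times an opaque function at two input sizes and compares the runtimes, which is one way to guess at a program's internals, its rough complexity, without ever seeing its source. The names `opaque_program` and `scaling_ratio` are hypothetical.

```python
import time

def opaque_program(data):
    # Stand-in for a program whose source code we pretend not to see.
    return sorted(data)

def scaling_ratio(f, small_n, large_n):
    """Compare runtimes at two input sizes to guess how f scales."""
    def measure(n):
        data = list(range(n, 0, -1))  # worst-ish case: reversed input
        start = time.perf_counter()
        f(data)
        return time.perf_counter() - start
    return measure(large_n) / measure(small_n)

# If a 10x larger input takes roughly 10x longer, the program is likely
# near O(n) or O(n log n); if it takes ~100x longer, it is closer to O(n^2).
ratio = scaling_ratio(opaque_program, 10_000, 100_000)
```

This mirrors the throughput idea exactly: we learn something real about the internal workings using only what goes in, what comes out, and how long it takes.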
In my case, I am more interested in how fast and how accurate Nature’s processing can possibly be. And I am very much concerned with the simplicity and efficiency of it. I also want to know the boundaries of information processing, under the basic presumption that everything has limits: no capacity can grow to infinity.
And finally, my primary concern is the throughput of using information by Nature. To know throughput, you don’t need to know the internal workings, only what comes in and out, and how fast.
These are elementary notions, but when applied they yield equations. These equations match almost exactly physical laws we already know. Because of this match, and because no actual physical laws were used, I think I am right in this approach.
Ultimately though, it boils down to what I said a few paragraphs above: is it an elementary truth that logic applied to facts is why anything interesting happens in the Universe? I think it’s hard to say “No” to this, and that’s why I think information processing drives everything.
I talked about limitations of processing information. For instance, failing to pick it up because it’s spread like “dust” in space. Or “jitters” in random directions that make it unpredictable. Or losing it because the storage is limited.
There’s a whole host of constraints that make information processing truly fickle. So if reality is essentially an inaccurate computer, is our participation in it just a combination of rational computation and “irrational” randomness?
Is the former “cold” information-processing and the latter “warm” spice of volatility? These aspects bring the realm of super tiny into our own existence. We’re not immune to imperfections on a level so below our senses.
We live our lives making rational decisions for the most part. We have a code of conduct which can be traced back to the benefits of compliance and consequences of dissent. It’s something that can be modeled and turned into a program, or effectively some sort of information-processing.
However, what of those decisions that escape the calculated realm? The unexpected and personal twists and turns? Are they just the result of inaccurate and unpredictable computation? Maybe to some extent. But maybe the opposite is true, too.
Let me ask you this: are we consciously seeking out the randomness in ourselves to alleviate the burdens of free will? Perhaps if we can’t decide, the remedy is to let Nature play dice. Perhaps we know intuitively that’s the only true fair game.
There are plenty of things we see that defy explanation but can be accounted for by reality’s computation being limited, inaccurate, and random. The world each of us sees is made of many events and many scenes played by fickle actors, both animate and inanimate. I think it’s reasonable to say our lives are a complex information system.
The more evolved an information system is, the tougher the decisions it must make. That’s because deciding what to do next must involve more variables, not fewer. The greater the purpose of a system, the more it is at odds with the limitations imposed by the imprecise computation of reality itself.
This forces an information-based system to use even more data and more complex algorithms. In other words, there are now many more viable points for random failure to alter the outcome. And sometimes the alteration is radical and unpredictable, especially if multiple random failures occur.
What I just told you is a bird’s-eye overview of how an information-based reality defies the concept of control. By that I mean the wish for control by its denizens, which is us and possibly others out there. However that desire unwinds, sooner or later it’s met with glitches that undermine it, no matter how perfect the undertaking.
What we call a “glitch” is at the very core of reality, built into it from the bottom-up. As I explained, it comes from limited capabilities of reality itself. It sits somewhere between predictability and control on one hand, and uncertainty and randomness on the other. There’s no way to fully compensate, even after the cleverest of contraptions.
Perhaps contemplating the world in your laptop can illuminate some of this a bit more. Think about simulated reality. No matter how many small details it provides, there’s a limit to what can be discerned.
A determined simulated person will one day find out there is a scale small enough that precision breaks down. Why is that? It’s for the same reason as in our own reality. Presumably the computer simulating this curious person won’t have infinite memory at its disposal.
So, at some point, a very small detail will look “fuzzy”, simply because the simulation runs out of memory. Meaning, at some level, even the most powerful simulated microscope will only see an indiscernible dot. This final dot is simply a bit in computer memory and can’t be broken down any more. Increasing simulated magnification won’t help because that would imply there’s an infinite amount of unpredictable detail requiring unlimited memory storage.
There are more similarities. The computer running a simulation will have to round its results up or down due to its own constraints. In other words, limited memory means imperfect computation. Pi can be computed only to so many decimals. After that it’s just a blur. And depending on how much memory is available at the time, the precision may vary, even if everything else stays the same. It means Pi may have different values at different times.
Whenever a simulation runs into such a “blur”, a decision can’t really be made and a random outcome is likely to occur. It means many simulated events happening at once will see a confluence of inaccuracies. Each event will be an approximate result, and a complex simulated scene will seemingly turn out a bit random, even if repeated exactly.
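On an ordinary computer these limits are easy to demonstrate. The snippet below is a standalone illustration of my own (not from the text): a 64-bit float holds Pi only to about 15–16 significant digits, and because of rounding, merely reordering a sum changes its result, the same kind of order-dependent "blur" described above.

```python
import math

# A 64-bit float stores Pi to only ~15-16 significant digits; beyond
# that, the machine has simply run out of memory for the detail.
print(repr(math.pi))  # 3.141592653589793 -- everything past this is lost

# Rounding makes arithmetic order-dependent: the same three numbers
# summed in a different order give a different answer, because 1.0 is
# smaller than the gap between adjacent floats near 1e17 and gets absorbed.
values = [1e17, 1.0, -1e17]
left_to_right = (values[0] + values[1]) + values[2]  # 1.0 is absorbed
reordered = (values[0] + values[2]) + values[1]      # 1.0 survives
print(left_to_right, reordered)  # 0.0 1.0
```

Two orderings of the same inputs, two different outputs: exactly the confluence of inaccuracies that would make a complex simulated scene turn out a bit different even when repeated.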
People living in this world, real or not, may be puzzled by this, and clearly they wouldn’t be alone. We’re finding the same in our own Universe. So, whether we’re simulated or not, our reality may work in a way that follows conceptually similar limitations.
Whether natural or inspired, those limitations shape our world. And they sometimes necessitate Nature to work in ways that current science can’t quite account for. With the computational view of the Universe, however, these ways are not just simple to explain, but also easy to predict.
We live in the information age. Is my hypothesis just emulating that? Am I seeing “computers” where none exist? Have I fallen into a “computer trap”? Is this just a knee-jerk reaction to living in a digital world?
I’d say it’s the other way around. The reason why we have computers, and beyond that, why living beings have the capacity to remember and use information, is because Nature is making it easy.
I am saying the Universe is based on information processing, and so is everything in it, including us and our creations. We should take a cue from Mother Nature.
The FTL Hypothesis, or a similar hypothesis, could have been created hundreds of years ago. There is nothing here that uses computer science as we know it. Moreover, I cannot use anything from modern computer science. If I did, I would be using the existing laws of physics to explain something allegedly “new”. It would be nothing but circular reasoning.
Perhaps my idea comes now and not hundreds of years ago because the age of computers shone light on the power of information. We’re all aware of it now. But this power has always been there, even before humans existed, and even before life on Earth existed.
All things, alive or inanimate, follow some logic, and logic feeds on information. It’s not a stretch to imply use of information drives everything. In fact, it might be more accurate to say we think higher of ourselves than we perhaps deserve. We are in love with our own abilities. And now we’re in love with our creations, the digital and computerized world.
Nature always had the ability to process information. Our intellects as well as our computers are maybe just the current peak of information use in Nature. After all, we are not supernatural, and neither is anything we make.
So, is my hypothesis an attempt to mold the Universe after our devices? Or is the truth that we, and our clever products, are mimicking Nature we’re a part of? Not for the purpose of mimicking of course, but as useful tools. Our creations are just an extension of how the Universe already works.
Do I imagine there’s a computer program running within elementary particles, such as electrons or protons? Do I imagine every such particle has a computer chip in it? No, not at all. So what is it that actually runs Nature?
No one will ever know for sure. I’ve talked about this. What makes it tick is an internal affair and off limits. However, some things can be surmised based on elementary logic. I mentioned ways to do that, and the formal hypothesis that follows will take it to the level of math equations.
If you accept the premise I give you, do you think the Universe is a computer? That depends on your definition of one. The label “computer” is really somewhat overused and jaded. It’s also self-aggrandizing. As a species, we only just learned how to construct computational machines.
True, we’re very impressed with the societal impact these machines have. And rightfully so. But an entity that uses information isn’t something we invented. Any biological system also processes information to achieve its function. Think of a brain as one example. As another, any chemical reaction is just combining different substances, each of which can be described in some way and thus carry information.
Effectively, everything in Nature takes some input data and produces output data. That’s a very essential definition of a “computer”. If Nature works like this externally, meaning we see it as such, it’s not a big step to say it may work this way internally.
There’s a reason to believe the Universe is computational on every level. Why is that? It’s simple. To explain, I’ll quickly go back to my hypothesis and to the simulation analogy.
Remember the difference between the observable and internal information? Observable is anything we can see and measure. That’s how Nature looks to us. Internal is what Nature uses internally so it works as it does.
I already said it’s reminiscent of the way things are in the simulated world on your laptop. In there, observable information is everything simulated people see and measure. And internal information is all the data inside the computer that makes them, like the Operating System and software that create simulated reality. But note ultimately it’s all just a single physical computer.
Now, if you think about Nature and how it works, would it be, at least conceptually, much different? Wouldn’t it all be a single information processing system, just like your laptop?
I mentioned that everything in nature we can observe, biological and chemical processes for instance, works like a computer. If everything observable is effectively a kind of computer in a broad sense of the word, then why wouldn’t Nature’s internal workings be like that too, just as with the laptop you use to run a simulated universe?
I think it would make sense to be this way, rather than to have two worlds, one internal and one external. One of those worlds would be just for show, for us to see. The other would be behind the scenes. Why would they be different in how they work? I don’t think they are.
Still, given how our physics is structured today, the idea of an informational Universe looks like a big step.
Maybe it’s because physics evolved as human industry did. It’s based on “forces”, “particles” and “waves” because those are the very basic things humans saw around them for millennia. For example, think of war, an unfortunate constant throughout history. It’s all about using “force”. The original conflict is people throwing things at each other. Perhaps rocks, or bullets, or worse. A generic name for them might as well be “particles”. As another example, water is all around us, and the concept of “waves” is everywhere.
We may have forgotten we have already extended these common notions from our world to that of Nature’s workings. But as I’ve explained, the only thing all of these concepts reduce to is information.
Given that before the modern age humans had no true industry of information, perhaps that explains why such an overwhelming concept got monumentally overlooked. It’s perfectly okay to look into the world around us and contemplate it as a basis. That’s what physicists have always done.
I say it’s time to extend that same strategy to a concept of information processing. That’s what the FTL Hypothesis is about.
Is the concept of information processing worthy of consideration? As in, worthy of being thought of as fundamental? I said today’s physics is based on notions such as forces, particles and waves. These are ordinary concepts. So is mainstream science already guilty of using everyday common stuff to explain how Nature works?
Yes and no. Yes, clearly people have a tendency to use what they see around them. No, because what works should be used. And some concepts, like particles, waves, and yes, information processing too, may just be a repetitive theme in Nature.
These concepts may be in front of our eyes, but they might also be how everything works. To deny a thought just because it’s common would not be productive.